Test Report: KVM_Linux_crio 18703

817bcb10c8415237264ed1ad2e32746beadbf0a3:2024-04-20:34116

Failed tests (32/311)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 154.7
32 TestAddons/parallel/MetricsServer 322.5
44 TestAddons/StoppedEnableDisable 154.44
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.59
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.18
163 TestMultiControlPlane/serial/StopSecondaryNode 142.09
165 TestMultiControlPlane/serial/RestartSecondaryNode 56.58
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 365.99
170 TestMultiControlPlane/serial/StopCluster 142.09
230 TestMultiNode/serial/RestartKeepsNodes 305.81
232 TestMultiNode/serial/StopMultiNode 141.65
239 TestPreload 277.34
247 TestKubernetesUpgrade 443.94
284 TestPause/serial/SecondStartNoReconfiguration 56.61
312 TestStartStop/group/old-k8s-version/serial/FirstStart 327.71
338 TestStartStop/group/no-preload/serial/Stop 139.14
341 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.17
344 TestStartStop/group/embed-certs/serial/Stop 139.04
345 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 95.78
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
350 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
355 TestStartStop/group/old-k8s-version/serial/SecondStart 770.88
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.39
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.24
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.22
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.39
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 412.66
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 367.89
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 202.24
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 63.49
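
Each failure above has a per-test log in the sections that follow. To reproduce a single failure locally, the integration tests can be filtered with go test's -run flag; the sketch below is an assumption about the local setup (a checkout of the minikube repository with out/minikube-linux-amd64 already built), not the exact invocation this CI job used:

	# hypothetical local re-run of one failed test from this report
	go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m -v
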
TestAddons/parallel/Ingress (154.7s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-903502 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-903502 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-903502 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [357d421a-b251-4370-be01-0a523ab9c08b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [357d421a-b251-4370-be01-0a523ab9c08b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006212915s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-903502 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.057212431s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
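
The "exit status 28" propagated through ssh above is curl's exit code for an operation timeout, so the request to the ingress controller on 127.0.0.1 inside the VM appears to have hung rather than been refused. A hedged manual re-check from the host (the same command the test ran, with verbose output and an explicit timeout added; profile and context names are taken from this run):

	out/minikube-linux-amd64 -p addons-903502 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-903502 -n ingress-nginx get pods,svc -o wide
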
addons_test.go:286: (dbg) Run:  kubectl --context addons-903502 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.36
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-903502 addons disable ingress-dns --alsologtostderr -v=1: (2.434921563s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-903502 addons disable ingress --alsologtostderr -v=1: (7.846218233s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-903502 -n addons-903502
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-903502 logs -n 25: (1.422684074s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-740714 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |                     |
	|         | -p download-only-740714                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| delete  | -p download-only-740714                                                                     | download-only-740714 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| delete  | -p download-only-347670                                                                     | download-only-347670 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| delete  | -p download-only-740714                                                                     | download-only-740714 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-466470 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |                     |
	|         | binary-mirror-466470                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39973                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-466470                                                                     | binary-mirror-466470 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| addons  | disable dashboard -p                                                                        | addons-903502        | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |                     |
	|         | addons-903502                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-903502        | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |                     |
	|         | addons-903502                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-903502 --wait=true                                                                | addons-903502        | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 20 Apr 24 00:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-903502 ssh cat                                                                       | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	|         | /opt/local-path-provisioner/pvc-1c22513e-d65d-44a6-87f2-b75cdb5b79eb_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	|         | -p addons-903502                                                                            |                      |         |         |                     |                     |
	| ip      | addons-903502 ip                                                                            | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | addons-903502                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | -p addons-903502                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | addons-903502                                                                               |                      |         |         |                     |                     |
	| addons  | addons-903502 addons                                                                        | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-903502 ssh curl -s                                                                   | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-903502 addons                                                                        | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-903502 ip                                                                            | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:03 UTC | 20 Apr 24 00:03 UTC |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:03 UTC | 20 Apr 24 00:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:03 UTC | 20 Apr 24 00:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 23:57:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 23:57:22.839259   84284 out.go:291] Setting OutFile to fd 1 ...
	I0419 23:57:22.839450   84284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 23:57:22.839458   84284 out.go:304] Setting ErrFile to fd 2...
	I0419 23:57:22.839466   84284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 23:57:22.840127   84284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0419 23:57:22.840863   84284 out.go:298] Setting JSON to false
	I0419 23:57:22.841798   84284 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9590,"bootTime":1713561453,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 23:57:22.841862   84284 start.go:139] virtualization: kvm guest
	I0419 23:57:22.844247   84284 out.go:177] * [addons-903502] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 23:57:22.845734   84284 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 23:57:22.845788   84284 notify.go:220] Checking for updates...
	I0419 23:57:22.847106   84284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 23:57:22.848690   84284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0419 23:57:22.850185   84284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0419 23:57:22.851688   84284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 23:57:22.853127   84284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 23:57:22.854616   84284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 23:57:22.885006   84284 out.go:177] * Using the kvm2 driver based on user configuration
	I0419 23:57:22.886402   84284 start.go:297] selected driver: kvm2
	I0419 23:57:22.886414   84284 start.go:901] validating driver "kvm2" against <nil>
	I0419 23:57:22.886425   84284 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 23:57:22.887075   84284 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 23:57:22.887141   84284 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 23:57:22.901412   84284 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0419 23:57:22.901457   84284 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 23:57:22.901652   84284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 23:57:22.901703   84284 cni.go:84] Creating CNI manager for ""
	I0419 23:57:22.901716   84284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 23:57:22.901722   84284 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 23:57:22.901775   84284 start.go:340] cluster config:
	{Name:addons-903502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 23:57:22.901867   84284 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 23:57:22.903520   84284 out.go:177] * Starting "addons-903502" primary control-plane node in "addons-903502" cluster
	I0419 23:57:22.904770   84284 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 23:57:22.904803   84284 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 23:57:22.904813   84284 cache.go:56] Caching tarball of preloaded images
	I0419 23:57:22.904874   84284 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 23:57:22.904885   84284 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 23:57:22.905168   84284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/config.json ...
	I0419 23:57:22.905186   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/config.json: {Name:mk048214cc8bc5762238f2ad20bad9492a64d565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:22.905301   84284 start.go:360] acquireMachinesLock for addons-903502: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 23:57:22.905370   84284 start.go:364] duration metric: took 42.43µs to acquireMachinesLock for "addons-903502"
	I0419 23:57:22.905391   84284 start.go:93] Provisioning new machine with config: &{Name:addons-903502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 23:57:22.905450   84284 start.go:125] createHost starting for "" (driver="kvm2")
	I0419 23:57:22.907019   84284 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0419 23:57:22.907190   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:57:22.907229   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:57:22.920589   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0419 23:57:22.921090   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:57:22.921708   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:57:22.921728   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:57:22.922096   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:57:22.922243   84284 main.go:141] libmachine: (addons-903502) Calling .GetMachineName
	I0419 23:57:22.922400   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:22.922504   84284 start.go:159] libmachine.API.Create for "addons-903502" (driver="kvm2")
	I0419 23:57:22.922542   84284 client.go:168] LocalClient.Create starting
	I0419 23:57:22.922574   84284 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0419 23:57:23.148758   84284 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0419 23:57:23.250414   84284 main.go:141] libmachine: Running pre-create checks...
	I0419 23:57:23.250438   84284 main.go:141] libmachine: (addons-903502) Calling .PreCreateCheck
	I0419 23:57:23.250930   84284 main.go:141] libmachine: (addons-903502) Calling .GetConfigRaw
	I0419 23:57:23.251381   84284 main.go:141] libmachine: Creating machine...
	I0419 23:57:23.251398   84284 main.go:141] libmachine: (addons-903502) Calling .Create
	I0419 23:57:23.251550   84284 main.go:141] libmachine: (addons-903502) Creating KVM machine...
	I0419 23:57:23.252846   84284 main.go:141] libmachine: (addons-903502) DBG | found existing default KVM network
	I0419 23:57:23.253681   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.253525   84322 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0419 23:57:23.253736   84284 main.go:141] libmachine: (addons-903502) DBG | created network xml: 
	I0419 23:57:23.253754   84284 main.go:141] libmachine: (addons-903502) DBG | <network>
	I0419 23:57:23.253768   84284 main.go:141] libmachine: (addons-903502) DBG |   <name>mk-addons-903502</name>
	I0419 23:57:23.253774   84284 main.go:141] libmachine: (addons-903502) DBG |   <dns enable='no'/>
	I0419 23:57:23.253780   84284 main.go:141] libmachine: (addons-903502) DBG |   
	I0419 23:57:23.253794   84284 main.go:141] libmachine: (addons-903502) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0419 23:57:23.253813   84284 main.go:141] libmachine: (addons-903502) DBG |     <dhcp>
	I0419 23:57:23.253825   84284 main.go:141] libmachine: (addons-903502) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0419 23:57:23.253831   84284 main.go:141] libmachine: (addons-903502) DBG |     </dhcp>
	I0419 23:57:23.253835   84284 main.go:141] libmachine: (addons-903502) DBG |   </ip>
	I0419 23:57:23.253840   84284 main.go:141] libmachine: (addons-903502) DBG |   
	I0419 23:57:23.253848   84284 main.go:141] libmachine: (addons-903502) DBG | </network>
	I0419 23:57:23.253853   84284 main.go:141] libmachine: (addons-903502) DBG | 
	I0419 23:57:23.258864   84284 main.go:141] libmachine: (addons-903502) DBG | trying to create private KVM network mk-addons-903502 192.168.39.0/24...
	I0419 23:57:23.321019   84284 main.go:141] libmachine: (addons-903502) DBG | private KVM network mk-addons-903502 192.168.39.0/24 created
	I0419 23:57:23.321052   84284 main.go:141] libmachine: (addons-903502) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502 ...
	I0419 23:57:23.321072   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.320962   84322 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0419 23:57:23.321111   84284 main.go:141] libmachine: (addons-903502) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0419 23:57:23.321129   84284 main.go:141] libmachine: (addons-903502) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 23:57:23.565891   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.565746   84322 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa...
	I0419 23:57:23.668632   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.668446   84322 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/addons-903502.rawdisk...
	I0419 23:57:23.668672   84284 main.go:141] libmachine: (addons-903502) DBG | Writing magic tar header
	I0419 23:57:23.668695   84284 main.go:141] libmachine: (addons-903502) DBG | Writing SSH key tar header
	I0419 23:57:23.668709   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.668659   84322 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502 ...
	I0419 23:57:23.668828   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502
	I0419 23:57:23.668851   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502 (perms=drwx------)
	I0419 23:57:23.668859   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0419 23:57:23.668866   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0419 23:57:23.668876   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0419 23:57:23.668882   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0419 23:57:23.668890   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 23:57:23.668898   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0419 23:57:23.668904   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0419 23:57:23.668924   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 23:57:23.668946   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins
	I0419 23:57:23.668958   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 23:57:23.668968   84284 main.go:141] libmachine: (addons-903502) Creating domain...
	I0419 23:57:23.668979   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home
	I0419 23:57:23.668991   84284 main.go:141] libmachine: (addons-903502) DBG | Skipping /home - not owner
	I0419 23:57:23.670158   84284 main.go:141] libmachine: (addons-903502) define libvirt domain using xml: 
	I0419 23:57:23.670186   84284 main.go:141] libmachine: (addons-903502) <domain type='kvm'>
	I0419 23:57:23.670205   84284 main.go:141] libmachine: (addons-903502)   <name>addons-903502</name>
	I0419 23:57:23.670232   84284 main.go:141] libmachine: (addons-903502)   <memory unit='MiB'>4000</memory>
	I0419 23:57:23.670246   84284 main.go:141] libmachine: (addons-903502)   <vcpu>2</vcpu>
	I0419 23:57:23.670253   84284 main.go:141] libmachine: (addons-903502)   <features>
	I0419 23:57:23.670269   84284 main.go:141] libmachine: (addons-903502)     <acpi/>
	I0419 23:57:23.670287   84284 main.go:141] libmachine: (addons-903502)     <apic/>
	I0419 23:57:23.670312   84284 main.go:141] libmachine: (addons-903502)     <pae/>
	I0419 23:57:23.670334   84284 main.go:141] libmachine: (addons-903502)     
	I0419 23:57:23.670358   84284 main.go:141] libmachine: (addons-903502)   </features>
	I0419 23:57:23.670377   84284 main.go:141] libmachine: (addons-903502)   <cpu mode='host-passthrough'>
	I0419 23:57:23.670395   84284 main.go:141] libmachine: (addons-903502)   
	I0419 23:57:23.670416   84284 main.go:141] libmachine: (addons-903502)   </cpu>
	I0419 23:57:23.670431   84284 main.go:141] libmachine: (addons-903502)   <os>
	I0419 23:57:23.670444   84284 main.go:141] libmachine: (addons-903502)     <type>hvm</type>
	I0419 23:57:23.670454   84284 main.go:141] libmachine: (addons-903502)     <boot dev='cdrom'/>
	I0419 23:57:23.670466   84284 main.go:141] libmachine: (addons-903502)     <boot dev='hd'/>
	I0419 23:57:23.670480   84284 main.go:141] libmachine: (addons-903502)     <bootmenu enable='no'/>
	I0419 23:57:23.670504   84284 main.go:141] libmachine: (addons-903502)   </os>
	I0419 23:57:23.670518   84284 main.go:141] libmachine: (addons-903502)   <devices>
	I0419 23:57:23.670531   84284 main.go:141] libmachine: (addons-903502)     <disk type='file' device='cdrom'>
	I0419 23:57:23.670561   84284 main.go:141] libmachine: (addons-903502)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/boot2docker.iso'/>
	I0419 23:57:23.670579   84284 main.go:141] libmachine: (addons-903502)       <target dev='hdc' bus='scsi'/>
	I0419 23:57:23.670592   84284 main.go:141] libmachine: (addons-903502)       <readonly/>
	I0419 23:57:23.670603   84284 main.go:141] libmachine: (addons-903502)     </disk>
	I0419 23:57:23.670617   84284 main.go:141] libmachine: (addons-903502)     <disk type='file' device='disk'>
	I0419 23:57:23.670631   84284 main.go:141] libmachine: (addons-903502)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 23:57:23.670653   84284 main.go:141] libmachine: (addons-903502)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/addons-903502.rawdisk'/>
	I0419 23:57:23.670667   84284 main.go:141] libmachine: (addons-903502)       <target dev='hda' bus='virtio'/>
	I0419 23:57:23.670676   84284 main.go:141] libmachine: (addons-903502)     </disk>
	I0419 23:57:23.670682   84284 main.go:141] libmachine: (addons-903502)     <interface type='network'>
	I0419 23:57:23.670691   84284 main.go:141] libmachine: (addons-903502)       <source network='mk-addons-903502'/>
	I0419 23:57:23.670696   84284 main.go:141] libmachine: (addons-903502)       <model type='virtio'/>
	I0419 23:57:23.670703   84284 main.go:141] libmachine: (addons-903502)     </interface>
	I0419 23:57:23.670708   84284 main.go:141] libmachine: (addons-903502)     <interface type='network'>
	I0419 23:57:23.670715   84284 main.go:141] libmachine: (addons-903502)       <source network='default'/>
	I0419 23:57:23.670719   84284 main.go:141] libmachine: (addons-903502)       <model type='virtio'/>
	I0419 23:57:23.670725   84284 main.go:141] libmachine: (addons-903502)     </interface>
	I0419 23:57:23.670730   84284 main.go:141] libmachine: (addons-903502)     <serial type='pty'>
	I0419 23:57:23.670738   84284 main.go:141] libmachine: (addons-903502)       <target port='0'/>
	I0419 23:57:23.670742   84284 main.go:141] libmachine: (addons-903502)     </serial>
	I0419 23:57:23.670750   84284 main.go:141] libmachine: (addons-903502)     <console type='pty'>
	I0419 23:57:23.670760   84284 main.go:141] libmachine: (addons-903502)       <target type='serial' port='0'/>
	I0419 23:57:23.670768   84284 main.go:141] libmachine: (addons-903502)     </console>
	I0419 23:57:23.670772   84284 main.go:141] libmachine: (addons-903502)     <rng model='virtio'>
	I0419 23:57:23.670781   84284 main.go:141] libmachine: (addons-903502)       <backend model='random'>/dev/random</backend>
	I0419 23:57:23.670785   84284 main.go:141] libmachine: (addons-903502)     </rng>
	I0419 23:57:23.670790   84284 main.go:141] libmachine: (addons-903502)     
	I0419 23:57:23.670796   84284 main.go:141] libmachine: (addons-903502)     
	I0419 23:57:23.670804   84284 main.go:141] libmachine: (addons-903502)   </devices>
	I0419 23:57:23.670809   84284 main.go:141] libmachine: (addons-903502) </domain>
	I0419 23:57:23.670819   84284 main.go:141] libmachine: (addons-903502) 
	I0419 23:57:23.675187   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:f5:e9:64 in network default
	I0419 23:57:23.675718   84284 main.go:141] libmachine: (addons-903502) Ensuring networks are active...
	I0419 23:57:23.675760   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:23.676396   84284 main.go:141] libmachine: (addons-903502) Ensuring network default is active
	I0419 23:57:23.676772   84284 main.go:141] libmachine: (addons-903502) Ensuring network mk-addons-903502 is active
	I0419 23:57:23.677194   84284 main.go:141] libmachine: (addons-903502) Getting domain xml...
	I0419 23:57:23.677850   84284 main.go:141] libmachine: (addons-903502) Creating domain...
	I0419 23:57:24.844776   84284 main.go:141] libmachine: (addons-903502) Waiting to get IP...
	I0419 23:57:24.845816   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:24.846189   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:24.846215   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:24.846169   84322 retry.go:31] will retry after 240.816363ms: waiting for machine to come up
	I0419 23:57:25.088865   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:25.089383   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:25.089431   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:25.089345   84322 retry.go:31] will retry after 283.575672ms: waiting for machine to come up
	I0419 23:57:25.374846   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:25.375271   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:25.375297   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:25.375216   84322 retry.go:31] will retry after 425.312228ms: waiting for machine to come up
	I0419 23:57:25.801682   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:25.802054   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:25.802076   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:25.802009   84322 retry.go:31] will retry after 407.959354ms: waiting for machine to come up
	I0419 23:57:26.211491   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:26.211946   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:26.211977   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:26.211906   84322 retry.go:31] will retry after 680.332989ms: waiting for machine to come up
	I0419 23:57:26.893729   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:26.894144   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:26.894177   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:26.894101   84322 retry.go:31] will retry after 574.715983ms: waiting for machine to come up
	I0419 23:57:27.471195   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:27.471737   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:27.471763   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:27.471660   84322 retry.go:31] will retry after 1.018392314s: waiting for machine to come up
	I0419 23:57:28.491524   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:28.491978   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:28.492012   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:28.491918   84322 retry.go:31] will retry after 1.121833343s: waiting for machine to come up
	I0419 23:57:29.615143   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:29.615514   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:29.615543   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:29.615455   84322 retry.go:31] will retry after 1.797582766s: waiting for machine to come up
	I0419 23:57:31.415437   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:31.415822   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:31.415857   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:31.415780   84322 retry.go:31] will retry after 1.441079659s: waiting for machine to come up
	I0419 23:57:32.857975   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:32.858423   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:32.858453   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:32.858373   84322 retry.go:31] will retry after 1.808645557s: waiting for machine to come up
	I0419 23:57:34.669892   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:34.670355   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:34.670388   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:34.670293   84322 retry.go:31] will retry after 2.46773113s: waiting for machine to come up
	I0419 23:57:37.141143   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:37.141677   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:37.141698   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:37.141638   84322 retry.go:31] will retry after 3.530647149s: waiting for machine to come up
	I0419 23:57:40.675702   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:40.676074   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:40.676103   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:40.676029   84322 retry.go:31] will retry after 4.808012141s: waiting for machine to come up
	I0419 23:57:45.486900   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.487349   84284 main.go:141] libmachine: (addons-903502) Found IP for machine: 192.168.39.36
	I0419 23:57:45.487374   84284 main.go:141] libmachine: (addons-903502) Reserving static IP address...
	I0419 23:57:45.487412   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has current primary IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.487704   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find host DHCP lease matching {name: "addons-903502", mac: "52:54:00:6a:a2:50", ip: "192.168.39.36"} in network mk-addons-903502
	I0419 23:57:45.558780   84284 main.go:141] libmachine: (addons-903502) DBG | Getting to WaitForSSH function...
	I0419 23:57:45.558814   84284 main.go:141] libmachine: (addons-903502) Reserved static IP address: 192.168.39.36
	I0419 23:57:45.558830   84284 main.go:141] libmachine: (addons-903502) Waiting for SSH to be available...
	I0419 23:57:45.561611   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.562153   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:45.562180   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.562377   84284 main.go:141] libmachine: (addons-903502) DBG | Using SSH client type: external
	I0419 23:57:45.562418   84284 main.go:141] libmachine: (addons-903502) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa (-rw-------)
	I0419 23:57:45.562464   84284 main.go:141] libmachine: (addons-903502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 23:57:45.562486   84284 main.go:141] libmachine: (addons-903502) DBG | About to run SSH command:
	I0419 23:57:45.562503   84284 main.go:141] libmachine: (addons-903502) DBG | exit 0
	I0419 23:57:45.685138   84284 main.go:141] libmachine: (addons-903502) DBG | SSH cmd err, output: <nil>: 
	I0419 23:57:45.685384   84284 main.go:141] libmachine: (addons-903502) KVM machine creation complete!
	I0419 23:57:45.685693   84284 main.go:141] libmachine: (addons-903502) Calling .GetConfigRaw
	I0419 23:57:45.686226   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:45.686437   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:45.686610   84284 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 23:57:45.686630   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:57:45.687907   84284 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 23:57:45.687952   84284 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 23:57:45.687964   84284 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 23:57:45.687973   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:45.690289   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.690640   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:45.690662   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.690784   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:45.690970   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.691137   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.691255   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:45.691453   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:45.691640   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:45.691651   84284 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 23:57:45.788819   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 23:57:45.788840   84284 main.go:141] libmachine: Detecting the provisioner...
	I0419 23:57:45.788847   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:45.791762   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.792106   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:45.792140   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.792268   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:45.792475   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.792624   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.792765   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:45.792921   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:45.793074   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:45.793084   84284 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 23:57:45.890376   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 23:57:45.890481   84284 main.go:141] libmachine: found compatible host: buildroot
	I0419 23:57:45.890491   84284 main.go:141] libmachine: Provisioning with buildroot...
	I0419 23:57:45.890500   84284 main.go:141] libmachine: (addons-903502) Calling .GetMachineName
	I0419 23:57:45.890771   84284 buildroot.go:166] provisioning hostname "addons-903502"
	I0419 23:57:45.890796   84284 main.go:141] libmachine: (addons-903502) Calling .GetMachineName
	I0419 23:57:45.891003   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:45.893656   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.894063   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:45.894112   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.894267   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:45.894436   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.894599   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.894786   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:45.894936   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:45.895129   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:45.895145   84284 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-903502 && echo "addons-903502" | sudo tee /etc/hostname
	I0419 23:57:46.007479   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-903502
	
	I0419 23:57:46.007515   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.010231   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.010583   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.010606   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.010817   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.011021   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.011195   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.011356   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.011548   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:46.011716   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:46.011731   84284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-903502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-903502/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-903502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 23:57:46.118975   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 23:57:46.119000   84284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0419 23:57:46.119018   84284 buildroot.go:174] setting up certificates
	I0419 23:57:46.119027   84284 provision.go:84] configureAuth start
	I0419 23:57:46.119035   84284 main.go:141] libmachine: (addons-903502) Calling .GetMachineName
	I0419 23:57:46.119325   84284 main.go:141] libmachine: (addons-903502) Calling .GetIP
	I0419 23:57:46.122046   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.122448   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.122484   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.122609   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.124832   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.125190   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.125220   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.125463   84284 provision.go:143] copyHostCerts
	I0419 23:57:46.125572   84284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0419 23:57:46.125764   84284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0419 23:57:46.125890   84284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0419 23:57:46.125985   84284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.addons-903502 san=[127.0.0.1 192.168.39.36 addons-903502 localhost minikube]
	I0419 23:57:46.214917   84284 provision.go:177] copyRemoteCerts
	I0419 23:57:46.214978   84284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 23:57:46.215002   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.217813   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.218117   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.218149   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.218301   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.218504   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.218642   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.218781   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:57:46.296979   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 23:57:46.322240   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 23:57:46.346920   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 23:57:46.371692   84284 provision.go:87] duration metric: took 252.65329ms to configureAuth
	I0419 23:57:46.371713   84284 buildroot.go:189] setting minikube options for container-runtime
	I0419 23:57:46.371896   84284 config.go:182] Loaded profile config "addons-903502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 23:57:46.371997   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.374816   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.375130   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.375201   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.375309   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.375535   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.375704   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.375945   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.376119   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:46.376309   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:46.376328   84284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 23:57:46.639600   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 23:57:46.639630   84284 main.go:141] libmachine: Checking connection to Docker...
	I0419 23:57:46.639704   84284 main.go:141] libmachine: (addons-903502) Calling .GetURL
	I0419 23:57:46.641174   84284 main.go:141] libmachine: (addons-903502) DBG | Using libvirt version 6000000
	I0419 23:57:46.643979   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.644330   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.644352   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.644511   84284 main.go:141] libmachine: Docker is up and running!
	I0419 23:57:46.644523   84284 main.go:141] libmachine: Reticulating splines...
	I0419 23:57:46.644531   84284 client.go:171] duration metric: took 23.721978225s to LocalClient.Create
	I0419 23:57:46.644558   84284 start.go:167] duration metric: took 23.722053862s to libmachine.API.Create "addons-903502"
	I0419 23:57:46.644577   84284 start.go:293] postStartSetup for "addons-903502" (driver="kvm2")
	I0419 23:57:46.644587   84284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 23:57:46.644604   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.644868   84284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 23:57:46.644898   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.647177   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.647468   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.647493   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.647665   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.647832   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.647984   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.648098   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:57:46.728809   84284 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 23:57:46.733520   84284 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 23:57:46.733544   84284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0419 23:57:46.733597   84284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0419 23:57:46.733615   84284 start.go:296] duration metric: took 89.033019ms for postStartSetup
	I0419 23:57:46.733653   84284 main.go:141] libmachine: (addons-903502) Calling .GetConfigRaw
	I0419 23:57:46.734243   84284 main.go:141] libmachine: (addons-903502) Calling .GetIP
	I0419 23:57:46.736755   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.737208   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.737237   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.737484   84284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/config.json ...
	I0419 23:57:46.737648   84284 start.go:128] duration metric: took 23.832188673s to createHost
	I0419 23:57:46.737669   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.739704   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.740025   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.740061   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.740164   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.740326   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.740461   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.740614   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.740768   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:46.740915   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:46.740926   84284 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 23:57:46.838157   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713571066.808343499
	
	I0419 23:57:46.838180   84284 fix.go:216] guest clock: 1713571066.808343499
	I0419 23:57:46.838189   84284 fix.go:229] Guest: 2024-04-19 23:57:46.808343499 +0000 UTC Remote: 2024-04-19 23:57:46.737658804 +0000 UTC m=+23.944466384 (delta=70.684695ms)
	I0419 23:57:46.838239   84284 fix.go:200] guest clock delta is within tolerance: 70.684695ms
	I0419 23:57:46.838256   84284 start.go:83] releasing machines lock for "addons-903502", held for 23.932863676s
	I0419 23:57:46.838281   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.838551   84284 main.go:141] libmachine: (addons-903502) Calling .GetIP
	I0419 23:57:46.841103   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.841430   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.841461   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.841601   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.842079   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.842255   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.842385   84284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 23:57:46.842431   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.842476   84284 ssh_runner.go:195] Run: cat /version.json
	I0419 23:57:46.842499   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.845045   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.845410   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.845442   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.845556   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.845618   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.845741   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.845894   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.845968   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.845992   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.846071   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:57:46.846118   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.846266   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.846442   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.846586   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:57:46.918231   84284 ssh_runner.go:195] Run: systemctl --version
	I0419 23:57:46.977113   84284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 23:57:47.141207   84284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 23:57:47.148596   84284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 23:57:47.148654   84284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 23:57:47.166822   84284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 23:57:47.166847   84284 start.go:494] detecting cgroup driver to use...
	I0419 23:57:47.166896   84284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 23:57:47.187854   84284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 23:57:47.206086   84284 docker.go:217] disabling cri-docker service (if available) ...
	I0419 23:57:47.206129   84284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 23:57:47.223019   84284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 23:57:47.238003   84284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 23:57:47.362643   84284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 23:57:47.494115   84284 docker.go:233] disabling docker service ...
	I0419 23:57:47.494193   84284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 23:57:47.508909   84284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 23:57:47.523235   84284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 23:57:47.666908   84284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 23:57:47.788905   84284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 23:57:47.804092   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 23:57:47.824973   84284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 23:57:47.825038   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.836688   84284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 23:57:47.836746   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.848601   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.862956   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.877355   84284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 23:57:47.892878   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.907307   84284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.928116   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.942287   84284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 23:57:47.955256   84284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 23:57:47.955308   84284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 23:57:47.973724   84284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 23:57:47.996587   84284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 23:57:48.134712   84284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 23:57:48.280505   84284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 23:57:48.280591   84284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 23:57:48.286116   84284 start.go:562] Will wait 60s for crictl version
	I0419 23:57:48.286189   84284 ssh_runner.go:195] Run: which crictl
	I0419 23:57:48.290183   84284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 23:57:48.328749   84284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 23:57:48.328862   84284 ssh_runner.go:195] Run: crio --version
	I0419 23:57:48.358701   84284 ssh_runner.go:195] Run: crio --version
	I0419 23:57:48.389622   84284 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 23:57:48.390919   84284 main.go:141] libmachine: (addons-903502) Calling .GetIP
	I0419 23:57:48.393703   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:48.394091   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:48.394114   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:48.394320   84284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 23:57:48.398975   84284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 23:57:48.414988   84284 kubeadm.go:877] updating cluster {Name:addons-903502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 23:57:48.415118   84284 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 23:57:48.415165   84284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 23:57:48.453153   84284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0419 23:57:48.453242   84284 ssh_runner.go:195] Run: which lz4
	I0419 23:57:48.457649   84284 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 23:57:48.462374   84284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 23:57:48.462396   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0419 23:57:50.034281   84284 crio.go:462] duration metric: took 1.576677218s to copy over tarball
	I0419 23:57:50.034368   84284 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 23:57:52.494020   84284 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.459608363s)
	I0419 23:57:52.494053   84284 crio.go:469] duration metric: took 2.459736027s to extract the tarball
	I0419 23:57:52.494067   84284 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 23:57:52.533083   84284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 23:57:52.576970   84284 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 23:57:52.576998   84284 cache_images.go:84] Images are preloaded, skipping loading
	I0419 23:57:52.577014   84284 kubeadm.go:928] updating node { 192.168.39.36 8443 v1.30.0 crio true true} ...
	I0419 23:57:52.577180   84284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-903502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 23:57:52.577249   84284 ssh_runner.go:195] Run: crio config
	I0419 23:57:52.623878   84284 cni.go:84] Creating CNI manager for ""
	I0419 23:57:52.623904   84284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 23:57:52.623920   84284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 23:57:52.623942   84284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-903502 NodeName:addons-903502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 23:57:52.624098   84284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-903502"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 23:57:52.624160   84284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 23:57:52.634902   84284 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 23:57:52.634963   84284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 23:57:52.645324   84284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0419 23:57:52.664775   84284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 23:57:52.682779   84284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0419 23:57:52.701117   84284 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0419 23:57:52.705838   84284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 23:57:52.719762   84284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 23:57:52.842826   84284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 23:57:52.861046   84284 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502 for IP: 192.168.39.36
	I0419 23:57:52.861073   84284 certs.go:194] generating shared ca certs ...
	I0419 23:57:52.861093   84284 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:52.861257   84284 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0419 23:57:52.968483   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt ...
	I0419 23:57:52.968510   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt: {Name:mk3e7941e28c54cac53c4989f2f18b35b315eb8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:52.968667   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key ...
	I0419 23:57:52.968678   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key: {Name:mk55446d928fb96f6a08651efbd5210423732b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:52.968748   84284 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0419 23:57:53.281329   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt ...
	I0419 23:57:53.281364   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt: {Name:mk01bba80ee303a40e5842406ec49b102a0f4de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.281541   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key ...
	I0419 23:57:53.281556   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key: {Name:mk685d994480bdb16a26b8b4354904f3d219044d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.281664   84284 certs.go:256] generating profile certs ...
	I0419 23:57:53.281724   84284 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.key
	I0419 23:57:53.281741   84284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt with IP's: []
	I0419 23:57:53.466222   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt ...
	I0419 23:57:53.466258   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: {Name:mkf4d8cb8884cf8b66721cc1da8dcafd60ee33d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.466422   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.key ...
	I0419 23:57:53.466433   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.key: {Name:mk9dfadc37c3da14550d2574628348d276523fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.466501   84284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key.bbc991ce
	I0419 23:57:53.466518   84284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt.bbc991ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.36]
	I0419 23:57:53.547735   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt.bbc991ce ...
	I0419 23:57:53.547770   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt.bbc991ce: {Name:mkc1ce65e23fe7bc0321991d4aa57384f5061964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.547931   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key.bbc991ce ...
	I0419 23:57:53.547945   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key.bbc991ce: {Name:mkf41bb88da9613c623f61d1bd00af8cbb18fa53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.548012   84284 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt.bbc991ce -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt
	I0419 23:57:53.548106   84284 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key.bbc991ce -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key
	I0419 23:57:53.548158   84284 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.key
	I0419 23:57:53.548176   84284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.crt with IP's: []
	I0419 23:57:53.629671   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.crt ...
	I0419 23:57:53.629711   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.crt: {Name:mk44855f25552262df405e97b9728f6df6a04fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.629902   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.key ...
	I0419 23:57:53.629916   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.key: {Name:mkbda7245c6475c666ba0dd184a96313960f3bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.630116   84284 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0419 23:57:53.630155   84284 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0419 23:57:53.630181   84284 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0419 23:57:53.630208   84284 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0419 23:57:53.630848   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 23:57:53.664786   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0419 23:57:53.713247   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 23:57:53.742720   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 23:57:53.771611   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0419 23:57:53.799826   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 23:57:53.828725   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 23:57:53.856838   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 23:57:53.885159   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 23:57:53.913809   84284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0419 23:57:53.933377   84284 ssh_runner.go:195] Run: openssl version
	I0419 23:57:53.940096   84284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 23:57:53.952818   84284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 23:57:53.958299   84284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0419 23:57:53.958381   84284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 23:57:53.965328   84284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 23:57:53.978050   84284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 23:57:53.982952   84284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 23:57:53.983040   84284 kubeadm.go:391] StartCluster: {Name:addons-903502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 23:57:53.983129   84284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 23:57:53.983211   84284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 23:57:54.027113   84284 cri.go:89] found id: ""
	I0419 23:57:54.027187   84284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 23:57:54.038615   84284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 23:57:54.049532   84284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 23:57:54.060740   84284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 23:57:54.060761   84284 kubeadm.go:156] found existing configuration files:
	
	I0419 23:57:54.060818   84284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 23:57:54.071603   84284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 23:57:54.071671   84284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 23:57:54.084321   84284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 23:57:54.095721   84284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 23:57:54.095779   84284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 23:57:54.105780   84284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 23:57:54.116316   84284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 23:57:54.116374   84284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 23:57:54.126569   84284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 23:57:54.136431   84284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 23:57:54.136489   84284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
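The block above is minikube's stale-config check: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint (https://control-plane.minikube.internal:8443) and, when the file is missing or points elsewhere, removes it so kubeadm can regenerate it. A minimal, hypothetical Go sketch of that check run locally (not minikube's actual kubeadm.go code, which issues the same grep/rm commands over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeStaleKubeconfigs deletes any kubeconfig that does not reference the
    // expected API server endpoint, mirroring the grep/rm pattern in the log.
    func removeStaleKubeconfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing at a different endpoint: drop it so
                // `kubeadm init` writes a fresh one.
                os.Remove(f)
                fmt.Printf("removed stale config %s\n", f)
            }
        }
    }

    func main() {
        removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }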
	I0419 23:57:54.146717   84284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 23:57:54.204386   84284 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0419 23:57:54.204473   84284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 23:57:54.346746   84284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 23:57:54.346938   84284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 23:57:54.347112   84284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 23:57:54.613352   84284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 23:57:54.713349   84284 out.go:204]   - Generating certificates and keys ...
	I0419 23:57:54.713488   84284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 23:57:54.713605   84284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 23:57:54.748316   84284 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 23:57:54.796222   84284 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0419 23:57:54.872233   84284 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0419 23:57:54.997599   84284 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0419 23:57:55.288948   84284 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0419 23:57:55.290846   84284 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-903502 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0419 23:57:55.426320   84284 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0419 23:57:55.426627   84284 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-903502 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0419 23:57:55.585716   84284 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 23:57:55.842262   84284 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 23:57:56.063757   84284 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0419 23:57:56.063858   84284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 23:57:56.230739   84284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 23:57:56.297623   84284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 23:57:56.379194   84284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 23:57:56.545827   84284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 23:57:56.725283   84284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 23:57:56.725989   84284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 23:57:56.728397   84284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 23:57:56.730747   84284 out.go:204]   - Booting up control plane ...
	I0419 23:57:56.730858   84284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 23:57:56.730946   84284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 23:57:56.732254   84284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 23:57:56.748507   84284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 23:57:56.749085   84284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 23:57:56.749156   84284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 23:57:56.882141   84284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0419 23:57:56.882266   84284 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 23:57:57.884238   84284 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.003081997s
	I0419 23:57:57.884317   84284 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0419 23:58:02.383206   84284 kubeadm.go:309] [api-check] The API server is healthy after 4.501707364s
	I0419 23:58:02.396410   84284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 23:58:02.413687   84284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 23:58:02.443398   84284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 23:58:02.443588   84284 kubeadm.go:309] [mark-control-plane] Marking the node addons-903502 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 23:58:02.459024   84284 kubeadm.go:309] [bootstrap-token] Using token: xhm5bp.g0g44g1zazvpep10
	I0419 23:58:02.460719   84284 out.go:204]   - Configuring RBAC rules ...
	I0419 23:58:02.460874   84284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 23:58:02.464615   84284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 23:58:02.474749   84284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 23:58:02.477727   84284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 23:58:02.480906   84284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 23:58:02.484362   84284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 23:58:02.790036   84284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 23:58:03.227734   84284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 23:58:03.789998   84284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 23:58:03.790023   84284 kubeadm.go:309] 
	I0419 23:58:03.790095   84284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 23:58:03.790107   84284 kubeadm.go:309] 
	I0419 23:58:03.790191   84284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 23:58:03.790235   84284 kubeadm.go:309] 
	I0419 23:58:03.790298   84284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 23:58:03.790385   84284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 23:58:03.790435   84284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 23:58:03.790442   84284 kubeadm.go:309] 
	I0419 23:58:03.790509   84284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 23:58:03.790522   84284 kubeadm.go:309] 
	I0419 23:58:03.790585   84284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 23:58:03.790597   84284 kubeadm.go:309] 
	I0419 23:58:03.790674   84284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 23:58:03.790780   84284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 23:58:03.790875   84284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 23:58:03.790884   84284 kubeadm.go:309] 
	I0419 23:58:03.790956   84284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 23:58:03.791064   84284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 23:58:03.791077   84284 kubeadm.go:309] 
	I0419 23:58:03.791217   84284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xhm5bp.g0g44g1zazvpep10 \
	I0419 23:58:03.791386   84284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0419 23:58:03.791427   84284 kubeadm.go:309] 	--control-plane 
	I0419 23:58:03.791442   84284 kubeadm.go:309] 
	I0419 23:58:03.791547   84284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 23:58:03.791566   84284 kubeadm.go:309] 
	I0419 23:58:03.791678   84284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xhm5bp.g0g44g1zazvpep10 \
	I0419 23:58:03.791845   84284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0419 23:58:03.792028   84284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
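The --discovery-token-ca-cert-hash printed in the join command above is, per kubeadm's documented behavior, the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A small, self-contained Go sketch that computes the same value from /var/lib/minikube/certs/ca.crt (the certs directory shown earlier in the log); this is illustrative, not minikube or kubeadm code:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }

Run on the node, this should reproduce the sha256:6f0a50c4... value shown in the join command above.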
	I0419 23:58:03.792055   84284 cni.go:84] Creating CNI manager for ""
	I0419 23:58:03.792068   84284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 23:58:03.793858   84284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0419 23:58:03.795183   84284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0419 23:58:03.810534   84284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
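The 496-byte /etc/cni/net.d/1-k8s.conflist written above is minikube's bridge CNI configuration; its exact contents are not shown in the log. A hypothetical Go sketch that writes a minimal bridge + portmap conflist of the same general shape (the CIDR, names, and field values below are illustrative placeholders, not the file minikube actually generates):

    package main

    import "os"

    // A minimal bridge CNI config with host-local IPAM and port mapping.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }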
	I0419 23:58:03.830216   84284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 23:58:03.830313   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:03.830367   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-903502 minikube.k8s.io/updated_at=2024_04_19T23_58_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=addons-903502 minikube.k8s.io/primary=true
	I0419 23:58:03.857965   84284 ops.go:34] apiserver oom_adj: -16
	I0419 23:58:03.933556   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:04.434036   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:04.934206   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:05.434382   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:05.933664   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:06.434564   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:06.933679   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:07.433774   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:07.933843   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:08.434389   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:08.933787   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:09.434361   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:09.933585   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:10.433742   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:10.933513   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:11.433944   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:11.934271   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:12.433788   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:12.933658   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:13.433940   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:13.933567   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:14.434608   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:14.934412   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:15.434412   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:15.934310   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:16.434606   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:16.934279   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:17.053917   84284 kubeadm.go:1107] duration metric: took 13.223679997s to wait for elevateKubeSystemPrivileges
	W0419 23:58:17.053999   84284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 23:58:17.054013   84284 kubeadm.go:393] duration metric: took 23.070981538s to StartCluster
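The half-second-interval `kubectl get sa default` calls between 23:58:03 and 23:58:17 are minikube waiting for the default ServiceAccount to exist before creating the minikube-rbac cluster-admin binding for kube-system:default. A simple, hypothetical Go version of that poll loop, shelling out to kubectl the same way (not minikube's ssh_runner implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // timeout expires, mirroring the retry loop in the log.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil // service account exists; safe to create the RBAC binding
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            panic(err)
        }
    }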
	I0419 23:58:17.054039   84284 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:58:17.054190   84284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0419 23:58:17.054590   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:58:17.054822   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0419 23:58:17.054890   84284 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 23:58:17.056958   84284 out.go:177] * Verifying Kubernetes components...
	I0419 23:58:17.055021   84284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0419 23:58:17.055107   84284 config.go:182] Loaded profile config "addons-903502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 23:58:17.058315   84284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 23:58:17.058328   84284 addons.go:69] Setting cloud-spanner=true in profile "addons-903502"
	I0419 23:58:17.058344   84284 addons.go:69] Setting yakd=true in profile "addons-903502"
	I0419 23:58:17.058350   84284 addons.go:69] Setting gcp-auth=true in profile "addons-903502"
	I0419 23:58:17.058380   84284 addons.go:69] Setting default-storageclass=true in profile "addons-903502"
	I0419 23:58:17.058399   84284 addons.go:69] Setting ingress-dns=true in profile "addons-903502"
	I0419 23:58:17.058404   84284 addons.go:234] Setting addon yakd=true in "addons-903502"
	I0419 23:58:17.058405   84284 addons.go:69] Setting metrics-server=true in profile "addons-903502"
	I0419 23:58:17.058415   84284 addons.go:69] Setting storage-provisioner=true in profile "addons-903502"
	I0419 23:58:17.058421   84284 mustload.go:65] Loading cluster: addons-903502
	I0419 23:58:17.058400   84284 addons.go:69] Setting inspektor-gadget=true in profile "addons-903502"
	I0419 23:58:17.058430   84284 addons.go:234] Setting addon metrics-server=true in "addons-903502"
	I0419 23:58:17.058433   84284 addons.go:234] Setting addon storage-provisioner=true in "addons-903502"
	I0419 23:58:17.058437   84284 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-903502"
	I0419 23:58:17.058448   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058454   84284 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-903502"
	I0419 23:58:17.058454   84284 addons.go:69] Setting ingress=true in profile "addons-903502"
	I0419 23:58:17.058464   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058474   84284 addons.go:234] Setting addon ingress=true in "addons-903502"
	I0419 23:58:17.058492   84284 addons.go:69] Setting registry=true in profile "addons-903502"
	I0419 23:58:17.058508   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058511   84284 addons.go:234] Setting addon registry=true in "addons-903502"
	I0419 23:58:17.058531   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058602   84284 config.go:182] Loaded profile config "addons-903502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 23:58:17.058386   84284 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-903502"
	I0419 23:58:17.058915   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.058924   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.058928   84284 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-903502"
	I0419 23:58:17.058344   84284 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-903502"
	I0419 23:58:17.058941   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058945   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.058464   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058963   84284 addons.go:69] Setting volumesnapshots=true in profile "addons-903502"
	I0419 23:58:17.058969   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058972   84284 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-903502"
	I0419 23:58:17.058979   84284 addons.go:234] Setting addon volumesnapshots=true in "addons-903502"
	I0419 23:58:17.058988   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058930   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059008   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058947   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059011   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059024   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059037   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058404   84284 addons.go:234] Setting addon cloud-spanner=true in "addons-903502"
	I0419 23:58:17.058449   84284 addons.go:234] Setting addon inspektor-gadget=true in "addons-903502"
	I0419 23:58:17.058335   84284 addons.go:69] Setting helm-tiller=true in profile "addons-903502"
	I0419 23:58:17.059055   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059072   84284 addons.go:234] Setting addon helm-tiller=true in "addons-903502"
	I0419 23:58:17.058425   84284 addons.go:234] Setting addon ingress-dns=true in "addons-903502"
	I0419 23:58:17.058953   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059098   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059264   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059283   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058421   84284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-903502"
	I0419 23:58:17.059379   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059405   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059429   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059455   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059468   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059495   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059594   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059615   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059628   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059678   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059711   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059779   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059863   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059893   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059907   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059928   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.079704   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I0419 23:58:17.080258   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.080928   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.080968   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.081587   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.081665   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0419 23:58:17.081763   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.081807   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.081802   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.081850   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.082119   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.082310   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.082356   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.082933   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.082966   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.083310   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.083863   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.083893   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.094580   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0419 23:58:17.094616   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I0419 23:58:17.094886   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0419 23:58:17.095159   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.095276   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.095642   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.095663   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.095810   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.095822   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.096026   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.096565   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.096605   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.096864   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.097488   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.097527   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.101688   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.105916   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.105938   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.106366   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.106957   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.106999   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.111463   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35193
	I0419 23:58:17.111673   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0419 23:58:17.112000   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.113368   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0419 23:58:17.113882   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.114445   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.114466   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.114888   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.115495   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.115532   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.115977   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I0419 23:58:17.116441   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.117003   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.117021   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.117426   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.117617   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.119625   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.120386   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.120407   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.120886   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.121641   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.121692   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.122030   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0419 23:58:17.122037   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.122054   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.122472   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.122472   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.122735   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.122932   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.123069   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.123087   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.123551   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.123840   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.123924   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.124015   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.125851   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0419 23:58:17.126590   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.126781   84284 addons.go:234] Setting addon default-storageclass=true in "addons-903502"
	I0419 23:58:17.126831   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.127182   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.127213   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.127299   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.127315   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.127657   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.127832   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.128114   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0419 23:58:17.128777   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.129296   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.129326   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.129487   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.129669   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.129721   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.132295   84284 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0419 23:58:17.130321   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.130385   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0419 23:58:17.131123   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0419 23:58:17.131304   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0419 23:58:17.132192   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I0419 23:58:17.133996   84284 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0419 23:58:17.134380   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.135362   84284 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0419 23:58:17.135676   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0419 23:58:17.137334   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.137349   84284 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0419 23:58:17.137364   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0419 23:58:17.137381   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.136133   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.137407   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.136341   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.136348   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.136384   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.138000   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.138022   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.138031   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.138238   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.138751   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.138769   84284 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-903502"
	I0419 23:58:17.138821   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.139064   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.139082   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.139210   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.139247   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.139506   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.139555   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.139920   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.140001   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.141850   84284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0419 23:58:17.140683   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.140838   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.142270   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.143139   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.143178   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.144808   84284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0419 23:58:17.143376   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.142978   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.143708   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0419 23:58:17.145103   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.147317   84284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0419 23:58:17.148778   84284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 23:58:17.147237   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.147414   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.146992   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0419 23:58:17.147432   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.147650   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0419 23:58:17.148537   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I0419 23:58:17.148926   84284 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0419 23:58:17.150075   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0419 23:58:17.150098   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.150236   84284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 23:58:17.150245   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 23:58:17.150260   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.151122   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.151466   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.151554   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.151625   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.151661   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.151703   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.151723   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.151976   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.152000   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.151999   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.152322   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.152379   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.152614   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.152637   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.152651   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.152753   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.153502   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.153514   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I0419 23:58:17.153546   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.153611   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.153627   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.153661   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.153694   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.153835   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.153987   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.154117   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.154184   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.154206   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.154264   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.154278   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.154345   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.154359   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.154662   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.154849   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.155229   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.155250   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.155554   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.155729   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.155907   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.158270   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.158285   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.158302   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.160222   84284 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0419 23:58:17.158740   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.158824   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.158947   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.159026   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.161558   84284 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0419 23:58:17.161568   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0419 23:58:17.161585   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.161625   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.161643   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.162344   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.162408   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.162506   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.162599   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.162780   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.163041   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.166151   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35745
	I0419 23:58:17.166774   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.167347   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.167364   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.168286   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.168327   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.168908   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.168946   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.169368   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.169401   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.169574   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.169831   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.170009   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.170150   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
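Each `sshutil.go:53] new ssh client` entry above corresponds to minikube opening a key-based SSH session to the node at 192.168.39.36:22 as user docker, over which the addon manifests are then copied (`scp memory --> /etc/kubernetes/addons/...`). A minimal, hypothetical sketch of such a client using golang.org/x/crypto/ssh (not minikube's sshutil package; the key path is the one shown in the log):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.36:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, _ := session.CombinedOutput("sudo ls /etc/kubernetes/addons")
        fmt.Print(string(out))
    }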
	I0419 23:58:17.176972   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0419 23:58:17.177277   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
	I0419 23:58:17.177685   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.177786   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.178171   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.178191   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.178329   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.178347   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.178556   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.178739   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.178928   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.179718   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.179751   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.180646   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.180712   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36007
	I0419 23:58:17.182462   84284 out.go:177]   - Using image docker.io/registry:2.8.3
	I0419 23:58:17.181493   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.182268   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I0419 23:58:17.183031   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0419 23:58:17.185398   84284 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0419 23:58:17.184401   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.184428   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.184477   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.185512   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
	I0419 23:58:17.185920   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0419 23:58:17.186747   84284 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0419 23:58:17.186765   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0419 23:58:17.186792   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.186840   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.188206   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.188224   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.188302   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.188430   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.188442   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.188811   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.188843   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.188811   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.189036   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.189103   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.189688   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.190240   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.190468   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.190490   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.190875   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.191028   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.191048   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.191154   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.191222   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.191395   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.191565   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.193379   84284 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0419 23:58:17.192458   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
	I0419 23:58:17.192684   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I0419 23:58:17.193203   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.193862   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.194615   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.193907   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.193919   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.194685   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.194706   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.194416   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.194829   84284 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0419 23:58:17.194841   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0419 23:58:17.194857   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.196296   84284 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0419 23:58:17.194954   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.195221   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.196028   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.198027   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.198073   84284 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0419 23:58:17.198209   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.198523   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.198703   84284 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0419 23:58:17.198711   84284 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0419 23:58:17.199327   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.199961   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.200006   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0419 23:58:17.200031   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.200081   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0419 23:58:17.201664   84284 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0419 23:58:17.201691   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0419 23:58:17.201710   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.203274   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0419 23:58:17.203298   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0419 23:58:17.203316   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.200405   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.200439   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.200461   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.201067   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.203411   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.200134   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.203452   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.203459   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.202840   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.203490   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.203141   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0419 23:58:17.203509   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.204437   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.205175   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0419 23:58:17.204445   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.206280   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.204475   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.204881   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.205291   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.205337   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.205363   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.206458   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.207478   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.207488   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0419 23:58:17.207512   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.207611   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.207886   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.208692   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0419 23:58:17.208813   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.208492   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36129
	I0419 23:58:17.208865   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.207901   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.209021   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.210001   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0419 23:58:17.209078   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.209646   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.210192   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.210879   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.212462   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0419 23:58:17.211566   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.211607   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.211625   84284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 23:58:17.211888   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.213723   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.213787   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0419 23:58:17.215077   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0419 23:58:17.215094   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0419 23:58:17.215108   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.213922   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 23:58:17.215173   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.216606   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0419 23:58:17.214103   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.214113   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.217881   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0419 23:58:17.217897   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0419 23:58:17.217912   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.218103   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.218235   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.218607   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.218628   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.218708   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.219406   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.219656   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.219953   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.219958   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.220003   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.220107   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.220279   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.220293   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.220507   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.220686   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.220895   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.220982   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.222868   84284 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0419 23:58:17.221845   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.222898   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.222911   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.222383   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.223101   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.224424   84284 out.go:177]   - Using image docker.io/busybox:stable
	I0419 23:58:17.225725   84284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0419 23:58:17.225739   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0419 23:58:17.225751   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.224594   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.226235   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	W0419 23:58:17.226948   84284 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54470->192.168.39.36:22: read: connection reset by peer
	I0419 23:58:17.226974   84284 retry.go:31] will retry after 297.279332ms: ssh: handshake failed: read tcp 192.168.39.1:54470->192.168.39.36:22: read: connection reset by peer
	I0419 23:58:17.228148   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.228413   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.228430   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.228578   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.228722   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.228824   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.228922   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	W0419 23:58:17.234477   84284 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54478->192.168.39.36:22: read: connection reset by peer
	I0419 23:58:17.234495   84284 retry.go:31] will retry after 179.667439ms: ssh: handshake failed: read tcp 192.168.39.1:54478->192.168.39.36:22: read: connection reset by peer
	I0419 23:58:17.656047   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0419 23:58:17.656082   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0419 23:58:17.660452   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0419 23:58:17.698158   84284 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0419 23:58:17.698188   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0419 23:58:17.706586   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 23:58:17.752530   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0419 23:58:17.765867   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 23:58:17.769214   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0419 23:58:17.781458   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0419 23:58:17.781479   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0419 23:58:17.809621   84284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0419 23:58:17.809650   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0419 23:58:17.850530   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0419 23:58:17.869545   84284 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0419 23:58:17.869584   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0419 23:58:17.904122   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0419 23:58:17.904145   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0419 23:58:17.908626   84284 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0419 23:58:17.908653   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0419 23:58:17.927918   84284 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0419 23:58:17.927942   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0419 23:58:18.039653   84284 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0419 23:58:18.039687   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0419 23:58:18.059068   84284 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0419 23:58:18.059104   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0419 23:58:18.081605   84284 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0419 23:58:18.081641   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0419 23:58:18.108763   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0419 23:58:18.132297   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0419 23:58:18.132332   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0419 23:58:18.150127   84284 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0419 23:58:18.150152   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0419 23:58:18.204200   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0419 23:58:18.227605   84284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0419 23:58:18.227640   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0419 23:58:18.238396   84284 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0419 23:58:18.238428   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0419 23:58:18.264758   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0419 23:58:18.270051   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0419 23:58:18.270072   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0419 23:58:18.341132   84284 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0419 23:58:18.341159   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0419 23:58:18.346362   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0419 23:58:18.346402   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0419 23:58:18.387851   84284 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.33299283s)
	I0419 23:58:18.387933   84284 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.329587298s)
	I0419 23:58:18.388001   84284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 23:58:18.388022   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0419 23:58:18.445983   84284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0419 23:58:18.446010   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0419 23:58:18.448078   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0419 23:58:18.448094   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0419 23:58:18.523488   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0419 23:58:18.523516   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0419 23:58:18.603546   84284 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0419 23:58:18.603573   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0419 23:58:18.617713   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0419 23:58:18.617735   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0419 23:58:18.809927   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0419 23:58:18.809952   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0419 23:58:18.951697   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0419 23:58:18.961685   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0419 23:58:18.966655   84284 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0419 23:58:18.966680   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0419 23:58:19.042624   84284 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0419 23:58:19.042657   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0419 23:58:19.268273   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0419 23:58:19.268301   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0419 23:58:19.432233   84284 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0419 23:58:19.432261   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0419 23:58:19.443919   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0419 23:58:19.735435   84284 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0419 23:58:19.735461   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0419 23:58:19.784294   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0419 23:58:19.784328   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0419 23:58:20.023070   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0419 23:58:20.079344   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0419 23:58:20.079376   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0419 23:58:20.506716   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0419 23:58:20.506742   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0419 23:58:21.005879   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0419 23:58:21.005907   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0419 23:58:21.646602   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0419 23:58:21.971302   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.264674032s)
	I0419 23:58:21.971375   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:21.971392   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:21.971773   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:21.971835   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:21.971855   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:21.971868   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:21.971867   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:21.972065   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.311582701s)
	I0419 23:58:21.972101   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:21.972206   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:21.972125   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:21.972181   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:21.972261   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:21.972466   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:21.972476   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:21.972527   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:21.972542   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:21.972566   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:21.972774   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:21.972849   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:21.972861   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:22.031290   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:22.031312   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:22.031603   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:22.031650   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:24.235486   84284 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0419 23:58:24.235534   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:24.238287   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:24.238746   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:24.238778   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:24.238959   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:24.239174   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:24.239335   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:24.239505   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:25.063304   84284 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0419 23:58:25.379287   84284 addons.go:234] Setting addon gcp-auth=true in "addons-903502"
	I0419 23:58:25.379351   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:25.379704   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:25.379739   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:25.394724   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42965
	I0419 23:58:25.395123   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:25.395691   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:25.395722   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:25.396082   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:25.396657   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:25.396701   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:25.444717   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0419 23:58:25.445159   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:25.445722   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:25.445745   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:25.446272   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:25.446540   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:25.448393   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:25.448660   84284 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0419 23:58:25.448692   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:25.451410   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:25.451830   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:25.451859   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:25.451966   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:25.452148   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:25.452312   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:25.452441   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:26.758236   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.992328895s)
	I0419 23:58:26.758303   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758317   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758354   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.989107327s)
	I0419 23:58:26.758396   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758412   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758423   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.005854567s)
	I0419 23:58:26.758448   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.649652744s)
	I0419 23:58:26.758424   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.907826902s)
	I0419 23:58:26.758472   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758453   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758483   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758486   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758511   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.554281271s)
	I0419 23:58:26.758473   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758531   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758533   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758541   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758573   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.493771789s)
	I0419 23:58:26.758590   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758601   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.758602   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758614   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.758625   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.758632   84284 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.370613804s)
	I0419 23:58:26.758636   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758645   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758652   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.758653   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.758660   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.758669   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758682   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758729   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.758752   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.758760   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.758768   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758775   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758803   84284 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.370760805s)
	I0419 23:58:26.758840   84284 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0419 23:58:26.758911   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.807182396s)
	I0419 23:58:26.758936   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758949   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759051   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.797335839s)
	I0419 23:58:26.759069   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759078   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759203   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.315247755s)
	W0419 23:58:26.759234   84284 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0419 23:58:26.759257   84284 retry.go:31] will retry after 186.491444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0419 23:58:26.759340   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.736238377s)
	I0419 23:58:26.759358   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759368   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759427   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759443   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759464   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.759471   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759478   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759485   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759531   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.759539   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759547   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759553   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759590   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759609   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.759616   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759624   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759630   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759665   84284 node_ready.go:35] waiting up to 6m0s for node "addons-903502" to be "Ready" ...
	I0419 23:58:26.759708   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759732   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.759739   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759750   84284 addons.go:470] Verifying addon ingress=true in "addons-903502"
	I0419 23:58:26.765061   84284 out.go:177] * Verifying ingress addon...
	I0419 23:58:26.759849   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759874   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766431   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759889   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759907   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766490   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766496   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.760098   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.760123   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766536   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.760140   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.760156   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766601   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.760350   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766619   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.766629   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.766630   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.761362   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.761390   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766703   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.761405   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.761420   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766791   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766802   84284 addons.go:470] Verifying addon metrics-server=true in "addons-903502"
	I0419 23:58:26.761922   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766861   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766870   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.761945   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.766879   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.764956   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.764972   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766930   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766939   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.766946   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.766962   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766502   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.766975   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766986   84284 addons.go:470] Verifying addon registry=true in "addons-903502"
	I0419 23:58:26.760375   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.769072   84284 out.go:177] * Verifying registry addon...
	I0419 23:58:26.767060   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.767075   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.767251   84284 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0419 23:58:26.767268   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.767285   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.767456   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.767489   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.770251   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.770283   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.770293   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.771499   84284 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-903502 service yakd-dashboard -n yakd-dashboard
	
	I0419 23:58:26.770940   84284 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0419 23:58:26.784581   84284 node_ready.go:49] node "addons-903502" has status "Ready":"True"
	I0419 23:58:26.784599   84284 node_ready.go:38] duration metric: took 24.91739ms for node "addons-903502" to be "Ready" ...
	I0419 23:58:26.784607   84284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 23:58:26.787562   84284 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0419 23:58:26.787584   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:26.799952   84284 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0419 23:58:26.799976   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:26.823324   84284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2dd9g" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.840790   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.840809   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.841113   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.841130   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.841169   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.858056   84284 pod_ready.go:92] pod "coredns-7db6d8ff4d-2dd9g" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:26.858090   84284 pod_ready.go:81] duration metric: took 34.73522ms for pod "coredns-7db6d8ff4d-2dd9g" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.858104   84284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tjjdl" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.891970   84284 pod_ready.go:92] pod "coredns-7db6d8ff4d-tjjdl" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:26.891999   84284 pod_ready.go:81] duration metric: took 33.886562ms for pod "coredns-7db6d8ff4d-tjjdl" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.892012   84284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.946839   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0419 23:58:26.989913   84284 pod_ready.go:92] pod "etcd-addons-903502" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:26.989937   84284 pod_ready.go:81] duration metric: took 97.916362ms for pod "etcd-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.989949   84284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.022682   84284 pod_ready.go:92] pod "kube-apiserver-addons-903502" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:27.022705   84284 pod_ready.go:81] duration metric: took 32.748664ms for pod "kube-apiserver-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.022717   84284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.166905   84284 pod_ready.go:92] pod "kube-controller-manager-addons-903502" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:27.166930   84284 pod_ready.go:81] duration metric: took 144.204537ms for pod "kube-controller-manager-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.166945   84284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7nxm" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.270455   84284 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-903502" context rescaled to 1 replicas
	I0419 23:58:27.274941   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:27.278436   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:27.563847   84284 pod_ready.go:92] pod "kube-proxy-v7nxm" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:27.563871   84284 pod_ready.go:81] duration metric: took 396.91828ms for pod "kube-proxy-v7nxm" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.563883   84284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.786804   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:27.790094   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:27.964301   84284 pod_ready.go:92] pod "kube-scheduler-addons-903502" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:27.964322   84284 pod_ready.go:81] duration metric: took 400.430953ms for pod "kube-scheduler-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.964332   84284 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:28.319931   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:28.323768   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:28.439685   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.793017842s)
	I0419 23:58:28.439748   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:28.439763   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:28.439771   84284 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.991083742s)
	I0419 23:58:28.441011   84284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0419 23:58:28.440070   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:28.440096   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:28.443182   84284 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0419 23:58:28.442083   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:28.444371   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:28.444382   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:28.444437   84284 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0419 23:58:28.444461   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0419 23:58:28.444655   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:28.444718   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:28.444733   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:28.444744   84284 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-903502"
	I0419 23:58:28.446013   84284 out.go:177] * Verifying csi-hostpath-driver addon...
	I0419 23:58:28.447829   84284 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0419 23:58:28.483817   84284 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0419 23:58:28.483840   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0419 23:58:28.534693   84284 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0419 23:58:28.534719   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:28.639829   84284 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0419 23:58:28.639864   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0419 23:58:28.777678   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:28.777922   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:28.802522   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0419 23:58:28.954009   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:29.275825   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:29.277689   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:29.416743   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.469836575s)
	I0419 23:58:29.416805   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:29.416818   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:29.417160   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:29.417200   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:29.417218   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:29.417239   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:29.417251   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:29.417572   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:29.417591   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:29.417621   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:29.478690   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:29.777912   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:29.780177   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:29.957136   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:29.972034   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:30.300479   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:30.307465   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:30.372590   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.570026582s)
	I0419 23:58:30.372710   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:30.372725   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:30.373161   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:30.373180   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:30.373190   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:30.373198   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:30.373510   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:30.373566   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:30.373586   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:30.375624   84284 addons.go:470] Verifying addon gcp-auth=true in "addons-903502"
	I0419 23:58:30.377247   84284 out.go:177] * Verifying gcp-auth addon...
	I0419 23:58:30.379310   84284 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0419 23:58:30.418268   84284 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0419 23:58:30.418290   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:30.484765   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:30.776316   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:30.784813   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:30.887478   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:30.953663   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:31.279558   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:31.284373   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:31.384500   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:31.453568   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:31.780329   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:31.781619   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:31.884903   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:31.953709   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:32.273866   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:32.276647   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:32.382677   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:32.456358   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:32.471150   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:32.775752   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:32.778757   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:32.884068   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:32.954193   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:33.274753   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:33.278106   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:33.383189   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:33.453125   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:33.775619   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:33.778295   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:33.893036   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:33.953753   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:34.515835   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:34.515869   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:34.516576   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:34.516812   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:34.520404   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:34.775352   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:34.778236   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:34.883688   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:34.953625   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:35.275880   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:35.278646   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:35.386481   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:35.455046   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:35.777112   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:35.777929   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:35.883013   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:35.954686   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:36.276301   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:36.278196   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:36.382977   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:36.457227   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:36.774696   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:36.777177   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:36.883489   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:36.954055   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:36.984075   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:37.274615   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:37.277555   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:37.384027   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:37.454094   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:37.774875   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:37.777447   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:37.884601   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:37.955753   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:38.276372   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:38.279691   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:38.383932   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:38.454142   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:38.774264   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:38.777161   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:38.883282   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:38.953376   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:39.276321   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:39.278315   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:39.383115   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:39.452758   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:39.469520   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:39.775191   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:39.777110   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:39.884170   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:39.952969   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:40.275453   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:40.277419   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:40.383832   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:40.454015   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:40.775047   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:40.778016   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:40.883101   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:40.953668   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:41.275585   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:41.279084   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:41.382740   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:41.453118   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:41.470362   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:41.775177   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:41.778206   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:41.883727   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:41.954577   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:42.309961   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:42.310763   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:42.383993   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:42.453931   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:42.774540   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:42.777005   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:42.883406   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:42.954230   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:43.276499   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:43.277478   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:43.384975   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:43.458697   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:43.472920   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:43.777439   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:43.791039   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:43.890757   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:43.953947   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:44.275621   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:44.278829   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:44.385026   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:44.455048   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:44.778453   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:44.779321   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:44.883744   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:44.955049   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:45.274956   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:45.277579   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:45.384302   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:45.453630   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:45.775545   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:45.777219   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:45.885356   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:45.953685   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:45.971520   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:46.279345   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:46.284210   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:46.383477   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:46.461044   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:46.775015   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:46.777358   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:46.888038   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:46.954412   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:47.354363   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:47.354493   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:47.384242   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:47.453969   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:47.775470   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:47.778045   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:47.884050   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:47.954632   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:47.971853   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:48.275512   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:48.278626   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:48.383644   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:48.454428   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:48.775383   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:48.777598   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:48.882675   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:48.954160   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:49.277978   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:49.280404   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:49.383432   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:49.454217   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:49.776658   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:49.779235   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:49.886213   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:49.954830   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:50.276031   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:50.279234   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:50.383552   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:50.453825   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:50.470029   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:50.775952   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:50.778111   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:50.883451   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:50.954212   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:51.275449   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:51.278138   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:51.384165   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:51.454364   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:51.775683   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:51.784590   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:51.883716   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:51.957357   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:52.275939   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:52.282987   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:52.383057   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:52.456731   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:52.474466   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:52.775876   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:52.778296   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:52.883555   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:52.953479   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:53.276335   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:53.279210   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:53.384294   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:53.453946   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:53.775662   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:53.778324   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:53.883431   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:53.953225   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:54.274930   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:54.277653   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:54.384158   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:54.454114   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:54.775117   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:54.779678   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:54.882773   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:54.956351   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:54.969396   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:55.274499   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:55.278294   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:55.383286   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:55.454284   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:55.775413   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:55.783091   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:55.883262   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:55.953822   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:56.765609   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:56.775033   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:56.777162   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:56.790892   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:56.804450   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:56.804575   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:56.888803   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:56.953575   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:56.971614   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:57.275037   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:57.289849   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:57.387661   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:57.454386   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:57.775026   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:57.784599   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:57.884377   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:57.954852   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:58.276608   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:58.279357   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:58.383169   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:58.453059   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:58.775756   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:58.778038   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:59.180461   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:59.183745   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:59.184428   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:59.275383   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:59.278333   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:59.383899   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:59.454124   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:59.775595   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:59.778370   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:59.884691   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:59.954647   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:00.284108   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:00.290041   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:00.386943   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:00.461218   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:00.775071   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:00.778897   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:00.883574   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:00.960357   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:01.275169   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:01.278648   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:01.383575   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:01.454243   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:01.469834   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:01.776827   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:01.778401   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:01.884087   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:01.954302   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:02.275393   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:02.283347   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:02.386785   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:02.454133   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:02.775132   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:02.780745   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:02.888200   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:02.953593   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:03.281156   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:03.284899   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:03.391455   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:03.458999   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:03.492024   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:03.792222   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:03.792284   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:03.883294   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:03.953197   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:04.376044   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:04.387662   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:04.419129   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:04.557004   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:04.785916   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:04.786393   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:04.883095   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:04.955697   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:05.275312   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:05.277773   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:05.382791   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:05.454176   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:05.775113   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:05.777464   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:05.885380   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:05.953214   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:05.969675   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:06.275332   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:06.279041   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:06.385200   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:06.453685   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:06.774945   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:06.777812   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:06.882994   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:06.957224   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:07.569260   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:07.582296   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:07.584056   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:07.584777   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:07.781008   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:07.784545   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:07.883022   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:07.954162   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:07.969899   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:08.276940   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:08.277898   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:08.383203   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:08.454658   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:08.775669   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:08.778393   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:08.893027   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:08.954218   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:09.285197   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:09.286522   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:09.383990   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:09.454434   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:09.776318   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:09.778798   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:09.883183   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:09.955000   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:09.970535   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:10.275880   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:10.288218   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:10.383235   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:10.454241   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:10.774577   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:10.777125   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:10.883383   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:10.953541   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:11.276798   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:11.278750   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:11.382533   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:11.453511   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:11.775327   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:11.777799   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:11.889902   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:11.953931   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:11.970926   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:12.429273   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:12.430003   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:12.430271   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:12.455069   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:12.775636   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:12.779814   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:12.883456   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:12.954690   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:13.275141   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:13.278479   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:13.383399   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:13.470023   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:13.775126   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:13.777257   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:13.882904   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:13.958582   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:13.972579   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:14.275579   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:14.278094   84284 kapi.go:107] duration metric: took 47.507149601s to wait for kubernetes.io/minikube-addons=registry ...
	I0419 23:59:14.386120   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:14.454534   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:14.775500   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:14.887808   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:14.953750   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:15.274652   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:15.383598   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:15.454017   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:15.777154   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:15.888877   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:15.954120   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:16.277398   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:16.385033   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:16.455082   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:16.475572   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:16.775468   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:16.885244   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:16.956770   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:17.274789   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:17.410048   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:17.455376   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:17.775199   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:17.883508   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:17.958066   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:18.276017   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:18.383507   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:18.455325   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:18.775039   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:18.882986   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:18.953890   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:18.986829   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:19.579639   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:19.580104   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:19.580658   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:19.774591   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:19.886255   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:19.953117   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:20.275507   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:20.383590   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:20.454090   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:20.774805   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:20.883938   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:20.954232   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:21.274525   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:21.384012   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:21.454496   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:21.473675   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:21.774890   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:21.883942   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:21.953590   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:22.275898   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:22.383566   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:22.454222   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:22.775235   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:22.886438   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:22.954738   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:23.274622   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:23.387976   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:23.453988   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:23.776542   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:23.883860   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:23.955174   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:23.971957   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:24.276603   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:24.383510   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:24.455449   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:24.775244   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:24.886926   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:24.971637   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:25.274578   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:25.384170   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:25.454248   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:25.775758   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:25.884447   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:25.953948   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:26.280921   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:26.384059   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:26.454636   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:26.469783   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:26.775669   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:26.884811   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:26.954999   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:27.276314   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:27.385600   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:27.454968   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:27.775213   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:27.895586   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:27.964010   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:28.275257   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:28.395010   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:28.460720   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:28.474516   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:28.775127   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:28.882997   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:28.954778   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:29.274923   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:29.382886   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:29.454243   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:29.778377   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:29.884947   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:29.955834   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:30.275175   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:30.383695   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:30.457942   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:30.776250   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:30.882927   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:30.953908   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:30.970373   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:31.274286   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:31.386100   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:31.457969   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:31.777691   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:31.884327   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:31.953528   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:32.277180   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:32.386774   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:32.454923   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:32.775535   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:32.883221   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:32.954136   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:32.972704   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:33.275646   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:33.383230   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:33.460835   84284 kapi.go:107] duration metric: took 1m5.013001457s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0419 23:59:33.782907   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:33.884085   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:34.275364   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:34.383752   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:34.873957   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:34.883691   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:34.972846   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:35.274703   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:35.383465   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:35.774279   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:35.883343   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:36.275812   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:36.383568   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:36.774949   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:36.882957   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:37.275098   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:37.383034   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:37.477347   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:37.776010   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:38.228899   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:38.275127   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:38.386516   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:38.775544   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:38.884459   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:39.276956   84284 kapi.go:107] duration metric: took 1m12.509703827s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0419 23:59:39.382180   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:39.883727   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:39.973518   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:40.383207   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:40.885406   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:41.384229   84284 kapi.go:107] duration metric: took 1m11.004915495s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0419 23:59:41.385793   84284 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-903502 cluster.
	I0419 23:59:41.387129   84284 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0419 23:59:41.388380   84284 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0419 23:59:41.389753   84284 out.go:177] * Enabled addons: ingress-dns, default-storageclass, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0419 23:59:41.391126   84284 addons.go:505] duration metric: took 1m24.336108338s for enable addons: enabled=[ingress-dns default-storageclass cloud-spanner storage-provisioner nvidia-device-plugin metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
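The gcp-auth messages above describe the opt-out mechanism only by its label key. As a minimal sketch (the pod name and the "true" label value are assumptions; the addon output names only the `gcp-auth-skip-secret` key), a pod that should not receive the mounted credentials could be created like this:

	kubectl --context addons-903502 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo            # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"    # assumed value; the addon message names only the key
	spec:
	  containers:
	  - name: app
	    image: nginx
	EOF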
	I0419 23:59:42.472275   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:44.973215   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:47.470219   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:49.471469   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:51.472949   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:53.475400   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:55.971902   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:58.471149   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:00.471854   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:02.972137   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:05.478061   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:07.971796   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:10.472071   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:12.475990   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:14.475200   84284 pod_ready.go:92] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"True"
	I0420 00:00:14.475226   84284 pod_ready.go:81] duration metric: took 1m46.510887705s for pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace to be "Ready" ...
	I0420 00:00:14.475238   84284 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gxtqp" in "kube-system" namespace to be "Ready" ...
	I0420 00:00:14.480368   84284 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-gxtqp" in "kube-system" namespace has status "Ready":"True"
	I0420 00:00:14.480388   84284 pod_ready.go:81] duration metric: took 5.143341ms for pod "nvidia-device-plugin-daemonset-gxtqp" in "kube-system" namespace to be "Ready" ...
	I0420 00:00:14.480406   84284 pod_ready.go:38] duration metric: took 1m47.695787345s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:00:14.480426   84284 api_server.go:52] waiting for apiserver process to appear ...
	I0420 00:00:14.480471   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:00:14.480540   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:00:14.533066   84284 cri.go:89] found id: "330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:14.533098   84284 cri.go:89] found id: ""
	I0420 00:00:14.533109   84284 logs.go:276] 1 containers: [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46]
	I0420 00:00:14.533168   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.538162   84284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:00:14.538237   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:00:14.598791   84284 cri.go:89] found id: "274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:14.598817   84284 cri.go:89] found id: ""
	I0420 00:00:14.598825   84284 logs.go:276] 1 containers: [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd]
	I0420 00:00:14.598873   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.604201   84284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:00:14.604279   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:00:14.655424   84284 cri.go:89] found id: "7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:14.655447   84284 cri.go:89] found id: ""
	I0420 00:00:14.655455   84284 logs.go:276] 1 containers: [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8]
	I0420 00:00:14.655502   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.660870   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:00:14.660936   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:00:14.708699   84284 cri.go:89] found id: "f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:14.708733   84284 cri.go:89] found id: ""
	I0420 00:00:14.708743   84284 logs.go:276] 1 containers: [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf]
	I0420 00:00:14.708808   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.713454   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:00:14.713522   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:00:14.759427   84284 cri.go:89] found id: "83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:14.759454   84284 cri.go:89] found id: ""
	I0420 00:00:14.759464   84284 logs.go:276] 1 containers: [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751]
	I0420 00:00:14.759523   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.764711   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:00:14.764781   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:00:14.828404   84284 cri.go:89] found id: "4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:14.828433   84284 cri.go:89] found id: ""
	I0420 00:00:14.828444   84284 logs.go:276] 1 containers: [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a]
	I0420 00:00:14.828500   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.834365   84284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:00:14.834419   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:00:14.886752   84284 cri.go:89] found id: ""
	I0420 00:00:14.886783   84284 logs.go:276] 0 containers: []
	W0420 00:00:14.886792   84284 logs.go:278] No container was found matching "kindnet"
	I0420 00:00:14.886802   84284 logs.go:123] Gathering logs for dmesg ...
	I0420 00:00:14.886819   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:00:14.904150   84284 logs.go:123] Gathering logs for coredns [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8] ...
	I0420 00:00:14.904188   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:14.947152   84284 logs.go:123] Gathering logs for kube-scheduler [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf] ...
	I0420 00:00:14.947187   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:15.002932   84284 logs.go:123] Gathering logs for kube-proxy [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751] ...
	I0420 00:00:15.002967   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:15.050711   84284 logs.go:123] Gathering logs for container status ...
	I0420 00:00:15.050744   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:00:15.132837   84284 logs.go:123] Gathering logs for kubelet ...
	I0420 00:00:15.132875   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 00:00:15.220983   84284 logs.go:123] Gathering logs for kube-apiserver [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46] ...
	I0420 00:00:15.221024   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:15.276789   84284 logs.go:123] Gathering logs for etcd [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd] ...
	I0420 00:00:15.276833   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:15.351358   84284 logs.go:123] Gathering logs for kube-controller-manager [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a] ...
	I0420 00:00:15.351399   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:15.420349   84284 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:00:15.420391   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:00:16.198732   84284 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:00:16.198780   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:00:18.846813   84284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:00:18.871529   84284 api_server.go:72] duration metric: took 2m1.816594549s to wait for apiserver process to appear ...
	I0420 00:00:18.871560   84284 api_server.go:88] waiting for apiserver healthz status ...
	I0420 00:00:18.871600   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:00:18.871660   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:00:18.918636   84284 cri.go:89] found id: "330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:18.918672   84284 cri.go:89] found id: ""
	I0420 00:00:18.918684   84284 logs.go:276] 1 containers: [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46]
	I0420 00:00:18.918756   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:18.923759   84284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:00:18.923849   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:00:18.980291   84284 cri.go:89] found id: "274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:18.980330   84284 cri.go:89] found id: ""
	I0420 00:00:18.980343   84284 logs.go:276] 1 containers: [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd]
	I0420 00:00:18.980414   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:18.987749   84284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:00:18.987831   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:00:19.032315   84284 cri.go:89] found id: "7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:19.032351   84284 cri.go:89] found id: ""
	I0420 00:00:19.032364   84284 logs.go:276] 1 containers: [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8]
	I0420 00:00:19.032436   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:19.037059   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:00:19.037126   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:00:19.081031   84284 cri.go:89] found id: "f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:19.081059   84284 cri.go:89] found id: ""
	I0420 00:00:19.081068   84284 logs.go:276] 1 containers: [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf]
	I0420 00:00:19.081123   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:19.086941   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:00:19.087032   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:00:19.133890   84284 cri.go:89] found id: "83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:19.133922   84284 cri.go:89] found id: ""
	I0420 00:00:19.133933   84284 logs.go:276] 1 containers: [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751]
	I0420 00:00:19.133995   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:19.138910   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:00:19.138989   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:00:19.197648   84284 cri.go:89] found id: "4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:19.197674   84284 cri.go:89] found id: ""
	I0420 00:00:19.197687   84284 logs.go:276] 1 containers: [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a]
	I0420 00:00:19.197750   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:19.203824   84284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:00:19.203895   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:00:19.248219   84284 cri.go:89] found id: ""
	I0420 00:00:19.248248   84284 logs.go:276] 0 containers: []
	W0420 00:00:19.248256   84284 logs.go:278] No container was found matching "kindnet"
	I0420 00:00:19.248265   84284 logs.go:123] Gathering logs for kube-apiserver [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46] ...
	I0420 00:00:19.248278   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:19.299308   84284 logs.go:123] Gathering logs for etcd [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd] ...
	I0420 00:00:19.299343   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:19.375220   84284 logs.go:123] Gathering logs for coredns [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8] ...
	I0420 00:00:19.375253   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:19.426768   84284 logs.go:123] Gathering logs for kube-proxy [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751] ...
	I0420 00:00:19.426798   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:19.483729   84284 logs.go:123] Gathering logs for kube-controller-manager [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a] ...
	I0420 00:00:19.483765   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:19.564014   84284 logs.go:123] Gathering logs for container status ...
	I0420 00:00:19.564056   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:00:19.631780   84284 logs.go:123] Gathering logs for kubelet ...
	I0420 00:00:19.631825   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 00:00:19.712091   84284 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:00:19.712129   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:00:19.850379   84284 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:00:19.850416   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:00:20.705535   84284 logs.go:123] Gathering logs for dmesg ...
	I0420 00:00:20.705582   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:00:20.722071   84284 logs.go:123] Gathering logs for kube-scheduler [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf] ...
	I0420 00:00:20.722114   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:23.279184   84284 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0420 00:00:23.284925   84284 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0420 00:00:23.286183   84284 api_server.go:141] control plane version: v1.30.0
	I0420 00:00:23.286206   84284 api_server.go:131] duration metric: took 4.414639406s to wait for apiserver health ...
	I0420 00:00:23.286214   84284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 00:00:23.286239   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:00:23.286287   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:00:23.346161   84284 cri.go:89] found id: "330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:23.346191   84284 cri.go:89] found id: ""
	I0420 00:00:23.346202   84284 logs.go:276] 1 containers: [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46]
	I0420 00:00:23.346261   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.355643   84284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:00:23.355709   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:00:23.407697   84284 cri.go:89] found id: "274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:23.407727   84284 cri.go:89] found id: ""
	I0420 00:00:23.407738   84284 logs.go:276] 1 containers: [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd]
	I0420 00:00:23.407815   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.412726   84284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:00:23.412795   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:00:23.454636   84284 cri.go:89] found id: "7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:23.454664   84284 cri.go:89] found id: ""
	I0420 00:00:23.454675   84284 logs.go:276] 1 containers: [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8]
	I0420 00:00:23.454728   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.459915   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:00:23.459992   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:00:23.535256   84284 cri.go:89] found id: "f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:23.535282   84284 cri.go:89] found id: ""
	I0420 00:00:23.535290   84284 logs.go:276] 1 containers: [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf]
	I0420 00:00:23.535343   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.540622   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:00:23.540688   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:00:23.580641   84284 cri.go:89] found id: "83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:23.580669   84284 cri.go:89] found id: ""
	I0420 00:00:23.580678   84284 logs.go:276] 1 containers: [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751]
	I0420 00:00:23.580732   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.587121   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:00:23.587213   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:00:23.630223   84284 cri.go:89] found id: "4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:23.630248   84284 cri.go:89] found id: ""
	I0420 00:00:23.630258   84284 logs.go:276] 1 containers: [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a]
	I0420 00:00:23.630316   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.637654   84284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:00:23.637730   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:00:23.689237   84284 cri.go:89] found id: ""
	I0420 00:00:23.689274   84284 logs.go:276] 0 containers: []
	W0420 00:00:23.689294   84284 logs.go:278] No container was found matching "kindnet"
	I0420 00:00:23.689316   84284 logs.go:123] Gathering logs for etcd [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd] ...
	I0420 00:00:23.689336   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:23.757462   84284 logs.go:123] Gathering logs for coredns [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8] ...
	I0420 00:00:23.757501   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:23.799690   84284 logs.go:123] Gathering logs for kube-scheduler [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf] ...
	I0420 00:00:23.799730   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:23.853507   84284 logs.go:123] Gathering logs for kube-proxy [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751] ...
	I0420 00:00:23.853543   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:23.895549   84284 logs.go:123] Gathering logs for kube-controller-manager [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a] ...
	I0420 00:00:23.895586   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:23.968131   84284 logs.go:123] Gathering logs for kubelet ...
	I0420 00:00:23.968167   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 00:00:24.054589   84284 logs.go:123] Gathering logs for dmesg ...
	I0420 00:00:24.054625   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:00:24.072058   84284 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:00:24.072087   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:00:24.234444   84284 logs.go:123] Gathering logs for kube-apiserver [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46] ...
	I0420 00:00:24.234479   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:24.284213   84284 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:00:24.284243   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:00:25.146073   84284 logs.go:123] Gathering logs for container status ...
	I0420 00:00:25.146117   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:00:27.714469   84284 system_pods.go:59] 18 kube-system pods found
	I0420 00:00:27.714511   84284 system_pods.go:61] "coredns-7db6d8ff4d-tjjdl" [a4aaa144-7e87-4738-955d-cee58d25f65f] Running
	I0420 00:00:27.714518   84284 system_pods.go:61] "csi-hostpath-attacher-0" [80ec45ea-7278-4269-9a2b-95c17a5d8905] Running
	I0420 00:00:27.714522   84284 system_pods.go:61] "csi-hostpath-resizer-0" [62a92217-a4e5-4d4d-a1c0-0e3dde22b693] Running
	I0420 00:00:27.714526   84284 system_pods.go:61] "csi-hostpathplugin-cgkxc" [6e8794fe-4529-45f6-8265-b00805d2c5a6] Running
	I0420 00:00:27.714529   84284 system_pods.go:61] "etcd-addons-903502" [a4da6d01-6f9d-4dd7-8e91-d2d164a6f2b5] Running
	I0420 00:00:27.714532   84284 system_pods.go:61] "kube-apiserver-addons-903502" [e70811f3-57b8-4426-b8d6-8dba77808da5] Running
	I0420 00:00:27.714537   84284 system_pods.go:61] "kube-controller-manager-addons-903502" [017deffc-9e48-44c2-83a1-4d8d10b865b2] Running
	I0420 00:00:27.714543   84284 system_pods.go:61] "kube-ingress-dns-minikube" [abc6ceb0-2bb9-4edd-ae34-8021b81671b4] Running
	I0420 00:00:27.714548   84284 system_pods.go:61] "kube-proxy-v7nxm" [f33a980c-c758-4488-86c4-3a4bc3c54cb7] Running
	I0420 00:00:27.714553   84284 system_pods.go:61] "kube-scheduler-addons-903502" [6d9b73d1-7d8d-4b0c-8a23-c0de4c831552] Running
	I0420 00:00:27.714560   84284 system_pods.go:61] "metrics-server-c59844bb4-msq6m" [9f348eb7-76f5-4a36-ad8b-50129a6f3ddf] Running
	I0420 00:00:27.714566   84284 system_pods.go:61] "nvidia-device-plugin-daemonset-gxtqp" [e35a27ed-f4cb-4e7f-a1c3-b0ddcc6c2546] Running
	I0420 00:00:27.714575   84284 system_pods.go:61] "registry-proxy-jstzq" [f7e2cb22-44fa-4141-9d32-90e8315b38f4] Running
	I0420 00:00:27.714580   84284 system_pods.go:61] "registry-qdwvn" [35c4ac3f-fc00-413c-b0e4-a411f7888bf5] Running
	I0420 00:00:27.714593   84284 system_pods.go:61] "snapshot-controller-745499f584-bpl6d" [8f69b6ef-f9a0-42dc-844f-713828861953] Running
	I0420 00:00:27.714596   84284 system_pods.go:61] "snapshot-controller-745499f584-jzsfg" [e779f2a2-0b40-4e3f-9cd5-646fcc84205e] Running
	I0420 00:00:27.714599   84284 system_pods.go:61] "storage-provisioner" [caace344-c304-4889-a0a2-41479039397a] Running
	I0420 00:00:27.714602   84284 system_pods.go:61] "tiller-deploy-6677d64bcd-cjckf" [9d3c558e-6fdb-4a44-b71f-4353e1043b27] Running
	I0420 00:00:27.714611   84284 system_pods.go:74] duration metric: took 4.428391332s to wait for pod list to return data ...
	I0420 00:00:27.714621   84284 default_sa.go:34] waiting for default service account to be created ...
	I0420 00:00:27.716997   84284 default_sa.go:45] found service account: "default"
	I0420 00:00:27.717024   84284 default_sa.go:55] duration metric: took 2.394503ms for default service account to be created ...
	I0420 00:00:27.717034   84284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 00:00:27.725403   84284 system_pods.go:86] 18 kube-system pods found
	I0420 00:00:27.725425   84284 system_pods.go:89] "coredns-7db6d8ff4d-tjjdl" [a4aaa144-7e87-4738-955d-cee58d25f65f] Running
	I0420 00:00:27.725431   84284 system_pods.go:89] "csi-hostpath-attacher-0" [80ec45ea-7278-4269-9a2b-95c17a5d8905] Running
	I0420 00:00:27.725435   84284 system_pods.go:89] "csi-hostpath-resizer-0" [62a92217-a4e5-4d4d-a1c0-0e3dde22b693] Running
	I0420 00:00:27.725440   84284 system_pods.go:89] "csi-hostpathplugin-cgkxc" [6e8794fe-4529-45f6-8265-b00805d2c5a6] Running
	I0420 00:00:27.725443   84284 system_pods.go:89] "etcd-addons-903502" [a4da6d01-6f9d-4dd7-8e91-d2d164a6f2b5] Running
	I0420 00:00:27.725448   84284 system_pods.go:89] "kube-apiserver-addons-903502" [e70811f3-57b8-4426-b8d6-8dba77808da5] Running
	I0420 00:00:27.725452   84284 system_pods.go:89] "kube-controller-manager-addons-903502" [017deffc-9e48-44c2-83a1-4d8d10b865b2] Running
	I0420 00:00:27.725457   84284 system_pods.go:89] "kube-ingress-dns-minikube" [abc6ceb0-2bb9-4edd-ae34-8021b81671b4] Running
	I0420 00:00:27.725460   84284 system_pods.go:89] "kube-proxy-v7nxm" [f33a980c-c758-4488-86c4-3a4bc3c54cb7] Running
	I0420 00:00:27.725464   84284 system_pods.go:89] "kube-scheduler-addons-903502" [6d9b73d1-7d8d-4b0c-8a23-c0de4c831552] Running
	I0420 00:00:27.725468   84284 system_pods.go:89] "metrics-server-c59844bb4-msq6m" [9f348eb7-76f5-4a36-ad8b-50129a6f3ddf] Running
	I0420 00:00:27.725473   84284 system_pods.go:89] "nvidia-device-plugin-daemonset-gxtqp" [e35a27ed-f4cb-4e7f-a1c3-b0ddcc6c2546] Running
	I0420 00:00:27.725480   84284 system_pods.go:89] "registry-proxy-jstzq" [f7e2cb22-44fa-4141-9d32-90e8315b38f4] Running
	I0420 00:00:27.725485   84284 system_pods.go:89] "registry-qdwvn" [35c4ac3f-fc00-413c-b0e4-a411f7888bf5] Running
	I0420 00:00:27.725491   84284 system_pods.go:89] "snapshot-controller-745499f584-bpl6d" [8f69b6ef-f9a0-42dc-844f-713828861953] Running
	I0420 00:00:27.725495   84284 system_pods.go:89] "snapshot-controller-745499f584-jzsfg" [e779f2a2-0b40-4e3f-9cd5-646fcc84205e] Running
	I0420 00:00:27.725501   84284 system_pods.go:89] "storage-provisioner" [caace344-c304-4889-a0a2-41479039397a] Running
	I0420 00:00:27.725505   84284 system_pods.go:89] "tiller-deploy-6677d64bcd-cjckf" [9d3c558e-6fdb-4a44-b71f-4353e1043b27] Running
	I0420 00:00:27.725513   84284 system_pods.go:126] duration metric: took 8.472084ms to wait for k8s-apps to be running ...
	I0420 00:00:27.725521   84284 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 00:00:27.725563   84284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:00:27.745095   84284 system_svc.go:56] duration metric: took 19.563292ms WaitForService to wait for kubelet
	I0420 00:00:27.745129   84284 kubeadm.go:576] duration metric: took 2m10.69020034s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:00:27.745160   84284 node_conditions.go:102] verifying NodePressure condition ...
	I0420 00:00:27.748692   84284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:00:27.748730   84284 node_conditions.go:123] node cpu capacity is 2
	I0420 00:00:27.748748   84284 node_conditions.go:105] duration metric: took 3.581269ms to run NodePressure ...
	I0420 00:00:27.748764   84284 start.go:240] waiting for startup goroutines ...
	I0420 00:00:27.748775   84284 start.go:245] waiting for cluster config update ...
	I0420 00:00:27.748805   84284 start.go:254] writing updated cluster config ...
	I0420 00:00:27.749204   84284 ssh_runner.go:195] Run: rm -f paused
	I0420 00:00:27.800243   84284 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 00:00:27.803122   84284 out.go:177] * Done! kubectl is now configured to use "addons-903502" cluster and "default" namespace by default
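	The readiness checks logged above (the kube-system pod list, the default service account lookup, the kubelet service probe, and the NodePressure read) can be approximated by hand when debugging a stuck start. Below is a minimal sketch, not minikube's internal ssh_runner: it assumes kubectl is on PATH and pointed at the addons-903502 context, and that the systemctl check is run on the node itself.

	// verify_ready.go - rough approximation of the readiness checks in the log above:
	// list kube-system pods, confirm the default service account, and probe the kubelet unit.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Counterpart of the system_pods.go wait: list pods in kube-system.
		pods, err := run("kubectl", "--context", "addons-903502",
			"get", "pods", "-n", "kube-system", "-o", "name")
		if err != nil {
			log.Fatalf("listing kube-system pods: %v\n%s", err, pods)
		}
		fmt.Print(pods)

		// Counterpart of the default_sa.go wait: confirm the default service account exists.
		if _, err := run("kubectl", "--context", "addons-903502",
			"get", "serviceaccount", "default", "-n", "default"); err != nil {
			log.Fatalf("default service account not found: %v", err)
		}

		// Counterpart of the system_svc.go wait: check the kubelet unit (run on the node).
		if _, err := run("sudo", "systemctl", "is-active", "--quiet", "kubelet"); err != nil {
			log.Fatalf("kubelet service is not active: %v", err)
		}
		fmt.Println("kubelet is active; cluster looks ready")
	}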
	
	
	==> CRI-O <==
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.785628952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713571432785602399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1196f7a9-e5aa-442a-98b7-250fc1e2266b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.786361703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd8cac21-dfe7-4311-8d6c-ce4a11c0f4d8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.786419853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd8cac21-dfe7-4311-8d6c-ce4a11c0f4d8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.786962111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57ebb4f9b3465ac6bddcb74b05b945e17d0ff577bceb4a737d7bf7255d93186c,PodSandboxId:9d5e95805281495deff9daa26045d03027ae44c4971eb06490359c495e4f5f42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713571424140818049,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wkhlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35cf4ddc-6019-4f53-9d02-615978016068,},Annotations:map[string]string{io.kubernetes.container.hash: feff1119,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3f21862f8c129979ab35d893169683ea68ead0638e90c0722fce0c73f4e82b,PodSandboxId:a613c505e45e70d66e353380844832346ffdc1172a22ccad870d484bfdd05c4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713571282871321373,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 357d421a-b251-4370-be01-0a523ab9c08b,},Annotations:map[string]string{io.kubern
etes.container.hash: f67037d2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c26faebc4b3bf023a1efb533b013bf709d695ea0d4c52ac3a17be5fd7a4e816,PodSandboxId:85b1ea0fd72f3a54468b4b5971b1a2f8342dab06f704689d889b69b2fda02d90,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713571273928126305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-g8dbz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 6faf5229-91df-43d8-9dc0-e15e7d5d5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: b8c9b944,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f609f34145ab572adcef26b266701098292d54c4f0a572f46fa68f71682bac16,PodSandboxId:66d2abdb62b3c1be32c1f751ede49d5faf0bcae3e0eeb37c7f8767580fe35796,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713571180822102530,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9gbc6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7a9b92ac-2ebd-421d-bbb6-1554362125aa,},Annotations:map[string]string{io.kubernetes.container.hash: 9378e5d5,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e4dea68045f62bdbd048b95dcca8d0dbc1469b849a34db2b69deba0e26c0fd,PodSandboxId:c898d59407bfc06233da2a2338135e41f0a1d74fe080be6103560cf2ace6a56b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713571158082185500,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dnt6r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de775d00-997b-4e13-a5c6-7c639f3b341b,},Annotations:map[string]string{io.kubernetes.container.hash: 11787993,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de05ed681bb6a57b83ccd6c4210b3dd862c2b02ffcc1ce2fe4359c024bab23e,PodSandboxId:4dce6e3069d14460ed4cc9104282a528922f4f79d0944799fc607d0d17d5e2d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1713571157579153783,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnf8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7499a3ec-d5d2-4dea-8379-3b7e66590849,},Annotations:map[string]string{io.kubernetes.container.hash: 23f30e7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8457d0a5f0677498881a6f8ed64886b7b7a9f17340fede9e63acdd1466ef980,PodSandboxId:0561fc45fce8405e200d4a51d97d2548cd2115660767668d08baff5c28632779,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1713571147694080648,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-s6wnr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63506f40-47b2-404e-bcd0-27cca6d4d119,},Annotations:map[string]string{io.kubernetes.container.hash: 30628b4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db50b237e66bf4f0646b0b5b8ecdd4e9efc10ec22d644f2f5be65ad98a75a58,PodSandboxId:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713571142763194284,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7cf829,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c17cbbe53f6a2c6b01ab722c9ae75e6c5f54addf820a6635079d21e7009d46,PodSandboxId:0c833c9ada6406f9f52d78dc87044b61e3309b5e6456392b02078ba1af2aefdd,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k
8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1713571124689290878,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc6ceb0-2bb9-4edd-ae34-8021b81671b4,},Annotations:map[string]string{io.kubernetes.container.hash: 8d073f57,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e4688c82da65059d9a08a942a8551baa3a5acdc3f429353f2be3869643e4d,PodSandboxId:1276d1f0cc15aedbc60131df62
fc93f4d3398dccd0ed85722a91f0a51801c072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713571103582498012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caace344-c304-4889-a0a2-41479039397a,},Annotations:map[string]string{io.kubernetes.container.hash: b4444dce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8,PodSandboxId:7e23cc76801eeee57244c4784caa87ddc1a3d0
205075ec542b364d1197ce169a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713571100282476208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tjjdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aaa144-7e87-4738-955d-cee58d25f65f,},Annotations:map[string]string{io.kubernetes.container.hash: e0aecade,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751,PodSandboxId:3834c06220ceba09220f99e8974219dabbe7a0ffb4ab70a35c9426246934feb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713571097572994107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7nxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33a980c-c758-4488-86c4-3a4bc3c54cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 1098c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd,PodSandboxId:f2d1b81b05076a763a243ac4c2f16a165645f6c9871e506e0ad7a5d40771b925,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713571078196105180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d99b3b4eb697aeba820b61e795f94,},Annotations:map[string]string{io.kubernetes.container.hash: 19040de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f
9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf,PodSandboxId:ef417ed11323d70760c1e54fc77a896bf89bc5259ef9b0e243e2732dccd4b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713571078186187145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110af44b051f67941f6c46f65a3705d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6d95f01f0cc97d8b
d433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a,PodSandboxId:2705afab2635f0b42731216bc57bb8e16a1e067af70bbfebcaa22eb04cad9572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713571078190456406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f3cb68d60b5a6a1e91cd34f53de8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330349ef
d9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46,PodSandboxId:244d50fc56b54386c25ecd6ab2a8692c239c10531eeeef93f6e5f7356aa465e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713571078101240143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f8ef1e46b514290a289b45fa916a37,},Annotations:map[string]string{io.kubernetes.container.hash: 3c32be39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.
go:74" id=fd8cac21-dfe7-4311-8d6c-ce4a11c0f4d8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.831918207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7d7f35c-af8a-4a94-8d68-8361aebcfeb5 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.832053072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7d7f35c-af8a-4a94-8d68-8361aebcfeb5 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.833411579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef808d56-b881-45ba-a276-96e8efd377f4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.835058423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713571432835031153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef808d56-b881-45ba-a276-96e8efd377f4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.836303150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa24ff30-098a-4d4e-88f8-ab9d75ec4a97 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.836353347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa24ff30-098a-4d4e-88f8-ab9d75ec4a97 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.836941110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57ebb4f9b3465ac6bddcb74b05b945e17d0ff577bceb4a737d7bf7255d93186c,PodSandboxId:9d5e95805281495deff9daa26045d03027ae44c4971eb06490359c495e4f5f42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713571424140818049,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wkhlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35cf4ddc-6019-4f53-9d02-615978016068,},Annotations:map[string]string{io.kubernetes.container.hash: feff1119,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3f21862f8c129979ab35d893169683ea68ead0638e90c0722fce0c73f4e82b,PodSandboxId:a613c505e45e70d66e353380844832346ffdc1172a22ccad870d484bfdd05c4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713571282871321373,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 357d421a-b251-4370-be01-0a523ab9c08b,},Annotations:map[string]string{io.kubern
etes.container.hash: f67037d2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c26faebc4b3bf023a1efb533b013bf709d695ea0d4c52ac3a17be5fd7a4e816,PodSandboxId:85b1ea0fd72f3a54468b4b5971b1a2f8342dab06f704689d889b69b2fda02d90,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713571273928126305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-g8dbz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 6faf5229-91df-43d8-9dc0-e15e7d5d5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: b8c9b944,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f609f34145ab572adcef26b266701098292d54c4f0a572f46fa68f71682bac16,PodSandboxId:66d2abdb62b3c1be32c1f751ede49d5faf0bcae3e0eeb37c7f8767580fe35796,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713571180822102530,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9gbc6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7a9b92ac-2ebd-421d-bbb6-1554362125aa,},Annotations:map[string]string{io.kubernetes.container.hash: 9378e5d5,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e4dea68045f62bdbd048b95dcca8d0dbc1469b849a34db2b69deba0e26c0fd,PodSandboxId:c898d59407bfc06233da2a2338135e41f0a1d74fe080be6103560cf2ace6a56b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713571158082185500,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dnt6r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de775d00-997b-4e13-a5c6-7c639f3b341b,},Annotations:map[string]string{io.kubernetes.container.hash: 11787993,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de05ed681bb6a57b83ccd6c4210b3dd862c2b02ffcc1ce2fe4359c024bab23e,PodSandboxId:4dce6e3069d14460ed4cc9104282a528922f4f79d0944799fc607d0d17d5e2d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1713571157579153783,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnf8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7499a3ec-d5d2-4dea-8379-3b7e66590849,},Annotations:map[string]string{io.kubernetes.container.hash: 23f30e7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8457d0a5f0677498881a6f8ed64886b7b7a9f17340fede9e63acdd1466ef980,PodSandboxId:0561fc45fce8405e200d4a51d97d2548cd2115660767668d08baff5c28632779,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1713571147694080648,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-s6wnr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63506f40-47b2-404e-bcd0-27cca6d4d119,},Annotations:map[string]string{io.kubernetes.container.hash: 30628b4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db50b237e66bf4f0646b0b5b8ecdd4e9efc10ec22d644f2f5be65ad98a75a58,PodSandboxId:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713571142763194284,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7cf829,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c17cbbe53f6a2c6b01ab722c9ae75e6c5f54addf820a6635079d21e7009d46,PodSandboxId:0c833c9ada6406f9f52d78dc87044b61e3309b5e6456392b02078ba1af2aefdd,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k
8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1713571124689290878,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc6ceb0-2bb9-4edd-ae34-8021b81671b4,},Annotations:map[string]string{io.kubernetes.container.hash: 8d073f57,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e4688c82da65059d9a08a942a8551baa3a5acdc3f429353f2be3869643e4d,PodSandboxId:1276d1f0cc15aedbc60131df62
fc93f4d3398dccd0ed85722a91f0a51801c072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713571103582498012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caace344-c304-4889-a0a2-41479039397a,},Annotations:map[string]string{io.kubernetes.container.hash: b4444dce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8,PodSandboxId:7e23cc76801eeee57244c4784caa87ddc1a3d0
205075ec542b364d1197ce169a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713571100282476208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tjjdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aaa144-7e87-4738-955d-cee58d25f65f,},Annotations:map[string]string{io.kubernetes.container.hash: e0aecade,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751,PodSandboxId:3834c06220ceba09220f99e8974219dabbe7a0ffb4ab70a35c9426246934feb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713571097572994107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7nxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33a980c-c758-4488-86c4-3a4bc3c54cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 1098c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd,PodSandboxId:f2d1b81b05076a763a243ac4c2f16a165645f6c9871e506e0ad7a5d40771b925,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713571078196105180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d99b3b4eb697aeba820b61e795f94,},Annotations:map[string]string{io.kubernetes.container.hash: 19040de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f
9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf,PodSandboxId:ef417ed11323d70760c1e54fc77a896bf89bc5259ef9b0e243e2732dccd4b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713571078186187145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110af44b051f67941f6c46f65a3705d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6d95f01f0cc97d8b
d433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a,PodSandboxId:2705afab2635f0b42731216bc57bb8e16a1e067af70bbfebcaa22eb04cad9572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713571078190456406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f3cb68d60b5a6a1e91cd34f53de8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330349ef
d9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46,PodSandboxId:244d50fc56b54386c25ecd6ab2a8692c239c10531eeeef93f6e5f7356aa465e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713571078101240143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f8ef1e46b514290a289b45fa916a37,},Annotations:map[string]string{io.kubernetes.container.hash: 3c32be39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.
go:74" id=aa24ff30-098a-4d4e-88f8-ab9d75ec4a97 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.880130144Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a8b9f6a-8378-4532-a27f-99e409deb9ca name=/runtime.v1.RuntimeService/Version
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.880208790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a8b9f6a-8378-4532-a27f-99e409deb9ca name=/runtime.v1.RuntimeService/Version
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.882456453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b17ff3a-75ab-4f9f-84aa-5d9ae53a4c6e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.883900911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713571432883876277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b17ff3a-75ab-4f9f-84aa-5d9ae53a4c6e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.884947956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0270c7a-1f86-4946-ba4f-9acb93d92c33 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.885002448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0270c7a-1f86-4946-ba4f-9acb93d92c33 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.885374798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57ebb4f9b3465ac6bddcb74b05b945e17d0ff577bceb4a737d7bf7255d93186c,PodSandboxId:9d5e95805281495deff9daa26045d03027ae44c4971eb06490359c495e4f5f42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713571424140818049,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wkhlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35cf4ddc-6019-4f53-9d02-615978016068,},Annotations:map[string]string{io.kubernetes.container.hash: feff1119,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3f21862f8c129979ab35d893169683ea68ead0638e90c0722fce0c73f4e82b,PodSandboxId:a613c505e45e70d66e353380844832346ffdc1172a22ccad870d484bfdd05c4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713571282871321373,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 357d421a-b251-4370-be01-0a523ab9c08b,},Annotations:map[string]string{io.kubern
etes.container.hash: f67037d2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c26faebc4b3bf023a1efb533b013bf709d695ea0d4c52ac3a17be5fd7a4e816,PodSandboxId:85b1ea0fd72f3a54468b4b5971b1a2f8342dab06f704689d889b69b2fda02d90,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713571273928126305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-g8dbz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 6faf5229-91df-43d8-9dc0-e15e7d5d5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: b8c9b944,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f609f34145ab572adcef26b266701098292d54c4f0a572f46fa68f71682bac16,PodSandboxId:66d2abdb62b3c1be32c1f751ede49d5faf0bcae3e0eeb37c7f8767580fe35796,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713571180822102530,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9gbc6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7a9b92ac-2ebd-421d-bbb6-1554362125aa,},Annotations:map[string]string{io.kubernetes.container.hash: 9378e5d5,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e4dea68045f62bdbd048b95dcca8d0dbc1469b849a34db2b69deba0e26c0fd,PodSandboxId:c898d59407bfc06233da2a2338135e41f0a1d74fe080be6103560cf2ace6a56b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713571158082185500,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dnt6r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de775d00-997b-4e13-a5c6-7c639f3b341b,},Annotations:map[string]string{io.kubernetes.container.hash: 11787993,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de05ed681bb6a57b83ccd6c4210b3dd862c2b02ffcc1ce2fe4359c024bab23e,PodSandboxId:4dce6e3069d14460ed4cc9104282a528922f4f79d0944799fc607d0d17d5e2d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1713571157579153783,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnf8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7499a3ec-d5d2-4dea-8379-3b7e66590849,},Annotations:map[string]string{io.kubernetes.container.hash: 23f30e7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8457d0a5f0677498881a6f8ed64886b7b7a9f17340fede9e63acdd1466ef980,PodSandboxId:0561fc45fce8405e200d4a51d97d2548cd2115660767668d08baff5c28632779,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1713571147694080648,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-s6wnr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63506f40-47b2-404e-bcd0-27cca6d4d119,},Annotations:map[string]string{io.kubernetes.container.hash: 30628b4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db50b237e66bf4f0646b0b5b8ecdd4e9efc10ec22d644f2f5be65ad98a75a58,PodSandboxId:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713571142763194284,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7cf829,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c17cbbe53f6a2c6b01ab722c9ae75e6c5f54addf820a6635079d21e7009d46,PodSandboxId:0c833c9ada6406f9f52d78dc87044b61e3309b5e6456392b02078ba1af2aefdd,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k
8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1713571124689290878,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc6ceb0-2bb9-4edd-ae34-8021b81671b4,},Annotations:map[string]string{io.kubernetes.container.hash: 8d073f57,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e4688c82da65059d9a08a942a8551baa3a5acdc3f429353f2be3869643e4d,PodSandboxId:1276d1f0cc15aedbc60131df62
fc93f4d3398dccd0ed85722a91f0a51801c072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713571103582498012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caace344-c304-4889-a0a2-41479039397a,},Annotations:map[string]string{io.kubernetes.container.hash: b4444dce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8,PodSandboxId:7e23cc76801eeee57244c4784caa87ddc1a3d0
205075ec542b364d1197ce169a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713571100282476208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tjjdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aaa144-7e87-4738-955d-cee58d25f65f,},Annotations:map[string]string{io.kubernetes.container.hash: e0aecade,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751,PodSandboxId:3834c06220ceba09220f99e8974219dabbe7a0ffb4ab70a35c9426246934feb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713571097572994107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7nxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33a980c-c758-4488-86c4-3a4bc3c54cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 1098c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd,PodSandboxId:f2d1b81b05076a763a243ac4c2f16a165645f6c9871e506e0ad7a5d40771b925,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713571078196105180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d99b3b4eb697aeba820b61e795f94,},Annotations:map[string]string{io.kubernetes.container.hash: 19040de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f
9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf,PodSandboxId:ef417ed11323d70760c1e54fc77a896bf89bc5259ef9b0e243e2732dccd4b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713571078186187145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110af44b051f67941f6c46f65a3705d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6d95f01f0cc97d8b
d433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a,PodSandboxId:2705afab2635f0b42731216bc57bb8e16a1e067af70bbfebcaa22eb04cad9572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713571078190456406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f3cb68d60b5a6a1e91cd34f53de8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330349ef
d9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46,PodSandboxId:244d50fc56b54386c25ecd6ab2a8692c239c10531eeeef93f6e5f7356aa465e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713571078101240143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f8ef1e46b514290a289b45fa916a37,},Annotations:map[string]string{io.kubernetes.container.hash: 3c32be39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.
go:74" id=b0270c7a-1f86-4946-ba4f-9acb93d92c33 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.930300407Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=908f7f59-dda3-46e8-a2cc-0aa5d87156a5 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.930371525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=908f7f59-dda3-46e8-a2cc-0aa5d87156a5 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.938100074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13c10931-bc79-42ff-b679-73ec83d68f5d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.939716953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713571432939687555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13c10931-bc79-42ff-b679-73ec83d68f5d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.940276749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4306e135-1fd4-408e-8171-7b222a358ba0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.940331882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4306e135-1fd4-408e-8171-7b222a358ba0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:03:52 addons-903502 crio[687]: time="2024-04-20 00:03:52.940720373Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57ebb4f9b3465ac6bddcb74b05b945e17d0ff577bceb4a737d7bf7255d93186c,PodSandboxId:9d5e95805281495deff9daa26045d03027ae44c4971eb06490359c495e4f5f42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713571424140818049,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wkhlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35cf4ddc-6019-4f53-9d02-615978016068,},Annotations:map[string]string{io.kubernetes.container.hash: feff1119,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3f21862f8c129979ab35d893169683ea68ead0638e90c0722fce0c73f4e82b,PodSandboxId:a613c505e45e70d66e353380844832346ffdc1172a22ccad870d484bfdd05c4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713571282871321373,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 357d421a-b251-4370-be01-0a523ab9c08b,},Annotations:map[string]string{io.kubern
etes.container.hash: f67037d2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c26faebc4b3bf023a1efb533b013bf709d695ea0d4c52ac3a17be5fd7a4e816,PodSandboxId:85b1ea0fd72f3a54468b4b5971b1a2f8342dab06f704689d889b69b2fda02d90,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713571273928126305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-g8dbz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 6faf5229-91df-43d8-9dc0-e15e7d5d5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: b8c9b944,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f609f34145ab572adcef26b266701098292d54c4f0a572f46fa68f71682bac16,PodSandboxId:66d2abdb62b3c1be32c1f751ede49d5faf0bcae3e0eeb37c7f8767580fe35796,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713571180822102530,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9gbc6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7a9b92ac-2ebd-421d-bbb6-1554362125aa,},Annotations:map[string]string{io.kubernetes.container.hash: 9378e5d5,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e4dea68045f62bdbd048b95dcca8d0dbc1469b849a34db2b69deba0e26c0fd,PodSandboxId:c898d59407bfc06233da2a2338135e41f0a1d74fe080be6103560cf2ace6a56b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713571158082185500,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dnt6r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de775d00-997b-4e13-a5c6-7c639f3b341b,},Annotations:map[string]string{io.kubernetes.container.hash: 11787993,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de05ed681bb6a57b83ccd6c4210b3dd862c2b02ffcc1ce2fe4359c024bab23e,PodSandboxId:4dce6e3069d14460ed4cc9104282a528922f4f79d0944799fc607d0d17d5e2d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1713571157579153783,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnf8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7499a3ec-d5d2-4dea-8379-3b7e66590849,},Annotations:map[string]string{io.kubernetes.container.hash: 23f30e7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8457d0a5f0677498881a6f8ed64886b7b7a9f17340fede9e63acdd1466ef980,PodSandboxId:0561fc45fce8405e200d4a51d97d2548cd2115660767668d08baff5c28632779,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1713571147694080648,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-s6wnr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63506f40-47b2-404e-bcd0-27cca6d4d119,},Annotations:map[string]string{io.kubernetes.container.hash: 30628b4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db50b237e66bf4f0646b0b5b8ecdd4e9efc10ec22d644f2f5be65ad98a75a58,PodSandboxId:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713571142763194284,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7cf829,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c17cbbe53f6a2c6b01ab722c9ae75e6c5f54addf820a6635079d21e7009d46,PodSandboxId:0c833c9ada6406f9f52d78dc87044b61e3309b5e6456392b02078ba1af2aefdd,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k
8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1713571124689290878,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc6ceb0-2bb9-4edd-ae34-8021b81671b4,},Annotations:map[string]string{io.kubernetes.container.hash: 8d073f57,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e4688c82da65059d9a08a942a8551baa3a5acdc3f429353f2be3869643e4d,PodSandboxId:1276d1f0cc15aedbc60131df62
fc93f4d3398dccd0ed85722a91f0a51801c072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713571103582498012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caace344-c304-4889-a0a2-41479039397a,},Annotations:map[string]string{io.kubernetes.container.hash: b4444dce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8,PodSandboxId:7e23cc76801eeee57244c4784caa87ddc1a3d0
205075ec542b364d1197ce169a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713571100282476208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tjjdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aaa144-7e87-4738-955d-cee58d25f65f,},Annotations:map[string]string{io.kubernetes.container.hash: e0aecade,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751,PodSandboxId:3834c06220ceba09220f99e8974219dabbe7a0ffb4ab70a35c9426246934feb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713571097572994107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7nxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33a980c-c758-4488-86c4-3a4bc3c54cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 1098c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd,PodSandboxId:f2d1b81b05076a763a243ac4c2f16a165645f6c9871e506e0ad7a5d40771b925,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713571078196105180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d99b3b4eb697aeba820b61e795f94,},Annotations:map[string]string{io.kubernetes.container.hash: 19040de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f
9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf,PodSandboxId:ef417ed11323d70760c1e54fc77a896bf89bc5259ef9b0e243e2732dccd4b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713571078186187145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110af44b051f67941f6c46f65a3705d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6d95f01f0cc97d8b
d433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a,PodSandboxId:2705afab2635f0b42731216bc57bb8e16a1e067af70bbfebcaa22eb04cad9572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713571078190456406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f3cb68d60b5a6a1e91cd34f53de8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330349ef
d9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46,PodSandboxId:244d50fc56b54386c25ecd6ab2a8692c239c10531eeeef93f6e5f7356aa465e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713571078101240143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f8ef1e46b514290a289b45fa916a37,},Annotations:map[string]string{io.kubernetes.container.hash: 3c32be39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.
go:74" id=4306e135-1fd4-408e-8171-7b222a358ba0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57ebb4f9b3465       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   9d5e958052814       hello-world-app-86c47465fc-wkhlc
	bc3f21862f8c1       docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9                              2 minutes ago       Running             nginx                     0                   a613c505e45e7       nginx
	3c26faebc4b3b       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        2 minutes ago       Running             headlamp                  0                   85b1ea0fd72f3       headlamp-7559bf459f-g8dbz
	f609f34145ab5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 4 minutes ago       Running             gcp-auth                  0                   66d2abdb62b3c       gcp-auth-5db96cd9b4-9gbc6
	27e4dea68045f       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                             4 minutes ago       Exited              patch                     1                   c898d59407bfc       ingress-nginx-admission-patch-dnt6r
	1de05ed681bb6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago       Exited              create                    0                   4dce6e3069d14       ingress-nginx-admission-create-gnf8j
	d8457d0a5f067       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   0561fc45fce84       yakd-dashboard-5ddbf7d777-s6wnr
	5db50b237e66b       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   40599943e203f       metrics-server-c59844bb4-msq6m
	91c17cbbe53f6       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f             5 minutes ago       Exited              minikube-ingress-dns      0                   0c833c9ada640       kube-ingress-dns-minikube
	aa3e4688c82da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   1276d1f0cc15a       storage-provisioner
	7e808ce1f4a89       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   7e23cc76801ee       coredns-7db6d8ff4d-tjjdl
	83fd02b669c84       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                             5 minutes ago       Running             kube-proxy                0                   3834c06220ceb       kube-proxy-v7nxm
	274269df8c392       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   f2d1b81b05076       etcd-addons-903502
	4e6d95f01f0cc       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                             5 minutes ago       Running             kube-controller-manager   0                   2705afab2635f       kube-controller-manager-addons-903502
	f9f32e140359d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                             5 minutes ago       Running             kube-scheduler            0                   ef417ed11323d       kube-scheduler-addons-903502
	330349efd9863       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                             5 minutes ago       Running             kube-apiserver            0                   244d50fc56b54       kube-apiserver-addons-903502
	
	
	==> coredns [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8] <==
	[INFO] 10.244.0.21:59358 - 7515 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000112478s
	[INFO] 10.244.0.21:55292 - 14168 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000138568s
	[INFO] 10.244.0.21:55292 - 41474 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000117015s
	[INFO] 10.244.0.21:59358 - 55490 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00021586s
	[INFO] 10.244.0.21:55292 - 4027 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073605s
	[INFO] 10.244.0.21:55292 - 59352 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000109523s
	[INFO] 10.244.0.21:59358 - 59637 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007701s
	[INFO] 10.244.0.21:59358 - 51731 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064641s
	[INFO] 10.244.0.21:59358 - 5248 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035522s
	[INFO] 10.244.0.21:59358 - 8218 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000022199s
	[INFO] 10.244.0.21:59358 - 32557 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000119549s
	[INFO] 10.244.0.21:39941 - 5877 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000090282s
	[INFO] 10.244.0.21:53271 - 34132 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00005708s
	[INFO] 10.244.0.21:53271 - 38080 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064765s
	[INFO] 10.244.0.21:39941 - 6789 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042858s
	[INFO] 10.244.0.21:39941 - 39462 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051825s
	[INFO] 10.244.0.21:53271 - 14114 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035513s
	[INFO] 10.244.0.21:53271 - 63402 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056631s
	[INFO] 10.244.0.21:39941 - 43697 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040294s
	[INFO] 10.244.0.21:39941 - 22299 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033932s
	[INFO] 10.244.0.21:53271 - 47411 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043473s
	[INFO] 10.244.0.21:39941 - 12862 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039329s
	[INFO] 10.244.0.21:53271 - 53853 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030856s
	[INFO] 10.244.0.21:39941 - 47299 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000401086s
	[INFO] 10.244.0.21:53271 - 64712 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036631s
	
	
	==> describe nodes <==
	Name:               addons-903502
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-903502
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=addons-903502
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T23_58_03_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-903502
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 23:58:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-903502
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:03:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:01:37 +0000   Fri, 19 Apr 2024 23:57:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:01:37 +0000   Fri, 19 Apr 2024 23:57:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:01:37 +0000   Fri, 19 Apr 2024 23:57:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:01:37 +0000   Fri, 19 Apr 2024 23:58:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    addons-903502
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 55198a5b1c754d1096d6da668a60272d
	  System UUID:                55198a5b-1c75-4d10-96d6-da668a60272d
	  Boot ID:                    62e9bbf8-4361-43d8-8ce0-a64c7b22127d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-wkhlc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-9gbc6                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  headlamp                    headlamp-7559bf459f-g8dbz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-7db6d8ff4d-tjjdl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m37s
	  kube-system                 etcd-addons-903502                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m50s
	  kube-system                 kube-apiserver-addons-903502             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-controller-manager-addons-903502    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-proxy-v7nxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-addons-903502             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 metrics-server-c59844bb4-msq6m           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m30s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-s6wnr          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m34s  kube-proxy       
	  Normal  Starting                 5m50s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m50s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m50s  kubelet          Node addons-903502 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m50s  kubelet          Node addons-903502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m50s  kubelet          Node addons-903502 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m49s  kubelet          Node addons-903502 status is now: NodeReady
	  Normal  RegisteredNode           5m38s  node-controller  Node addons-903502 event: Registered Node addons-903502 in Controller
	
	
	==> dmesg <==
	[  +8.319846] systemd-fstab-generator[1491]: Ignoring "noauto" option for root device
	[  +5.510517] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.065068] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.379025] kauditd_printk_skb: 99 callbacks suppressed
	[ +21.827827] kauditd_printk_skb: 9 callbacks suppressed
	[Apr19 23:59] kauditd_printk_skb: 30 callbacks suppressed
	[ +10.385889] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.067849] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.195877] kauditd_printk_skb: 63 callbacks suppressed
	[  +6.175336] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.305392] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.904694] kauditd_printk_skb: 31 callbacks suppressed
	[Apr20 00:00] kauditd_printk_skb: 24 callbacks suppressed
	[ +19.533955] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.189134] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.005053] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.237675] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.732277] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.091957] kauditd_printk_skb: 41 callbacks suppressed
	[Apr20 00:01] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.942783] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.784284] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.928818] kauditd_printk_skb: 27 callbacks suppressed
	[Apr20 00:03] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.528772] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd] <==
	{"level":"warn","ts":"2024-04-19T23:59:19.554692Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T23:59:19.052813Z","time spent":"501.823208ms","remote":"127.0.0.1:41242","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4435,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dnt6r\" mod_revision:1011 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dnt6r\" value_size:4363 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dnt6r\" > >"}
	{"level":"warn","ts":"2024-04-19T23:59:19.554984Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.150614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14646"}
	{"level":"info","ts":"2024-04-19T23:59:19.555037Z","caller":"traceutil/trace.go:171","msg":"trace[802504494] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1016; }","duration":"294.227431ms","start":"2024-04-19T23:59:19.260801Z","end":"2024-04-19T23:59:19.555028Z","steps":["trace[802504494] 'agreement among raft nodes before linearized reading'  (duration: 294.096268ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:19.555457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.874886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85388"}
	{"level":"info","ts":"2024-04-19T23:59:19.555593Z","caller":"traceutil/trace.go:171","msg":"trace[1945804714] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1016; }","duration":"117.104111ms","start":"2024-04-19T23:59:19.438481Z","end":"2024-04-19T23:59:19.555585Z","steps":["trace[1945804714] 'agreement among raft nodes before linearized reading'  (duration: 116.881244ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:19.556011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.266209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11161"}
	{"level":"info","ts":"2024-04-19T23:59:19.556443Z","caller":"traceutil/trace.go:171","msg":"trace[1101703132] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1016; }","duration":"185.720812ms","start":"2024-04-19T23:59:19.370712Z","end":"2024-04-19T23:59:19.556433Z","steps":["trace[1101703132] 'agreement among raft nodes before linearized reading'  (duration: 184.279416ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T23:59:34.859222Z","caller":"traceutil/trace.go:171","msg":"trace[1624491735] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"135.293349ms","start":"2024-04-19T23:59:34.72391Z","end":"2024-04-19T23:59:34.859204Z","steps":["trace[1624491735] 'process raft request'  (duration: 134.824606ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T23:59:38.20733Z","caller":"traceutil/trace.go:171","msg":"trace[1231477436] linearizableReadLoop","detail":"{readStateIndex:1172; appliedIndex:1171; }","duration":"338.615765ms","start":"2024-04-19T23:59:37.8687Z","end":"2024-04-19T23:59:38.207316Z","steps":["trace[1231477436] 'read index received'  (duration: 338.362116ms)","trace[1231477436] 'applied index is now lower than readState.Index'  (duration: 253.049µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-19T23:59:38.207702Z","caller":"traceutil/trace.go:171","msg":"trace[29911588] transaction","detail":"{read_only:false; response_revision:1141; number_of_response:1; }","duration":"416.688652ms","start":"2024-04-19T23:59:37.790999Z","end":"2024-04-19T23:59:38.207688Z","steps":["trace[29911588] 'process raft request'  (duration: 416.102594ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:38.208858Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T23:59:37.790986Z","time spent":"417.816879ms","remote":"127.0.0.1:41220","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1140 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-19T23:59:38.207863Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"339.147147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11447"}
	{"level":"warn","ts":"2024-04-19T23:59:38.208668Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.913392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-msq6m\" ","response":"range_response_count:1 size:4459"}
	{"level":"info","ts":"2024-04-19T23:59:38.209684Z","caller":"traceutil/trace.go:171","msg":"trace[858521124] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-msq6m; range_end:; response_count:1; response_revision:1141; }","duration":"254.953183ms","start":"2024-04-19T23:59:37.954721Z","end":"2024-04-19T23:59:38.209674Z","steps":["trace[858521124] 'agreement among raft nodes before linearized reading'  (duration: 253.729065ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T23:59:38.209818Z","caller":"traceutil/trace.go:171","msg":"trace[1564542094] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1141; }","duration":"341.13803ms","start":"2024-04-19T23:59:37.868675Z","end":"2024-04-19T23:59:38.209813Z","steps":["trace[1564542094] 'agreement among raft nodes before linearized reading'  (duration: 339.059766ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:38.20984Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T23:59:37.868663Z","time spent":"341.168089ms","remote":"127.0.0.1:41242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11469,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-04-19T23:59:38.208727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.498902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-04-19T23:59:38.209906Z","caller":"traceutil/trace.go:171","msg":"trace[191841844] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1141; }","duration":"253.678081ms","start":"2024-04-19T23:59:37.956223Z","end":"2024-04-19T23:59:38.209902Z","steps":["trace[191841844] 'agreement among raft nodes before linearized reading'  (duration: 252.468045ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:43.059944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.319466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-msq6m\" ","response":"range_response_count:1 size:4459"}
	{"level":"info","ts":"2024-04-19T23:59:43.06006Z","caller":"traceutil/trace.go:171","msg":"trace[2128476879] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-msq6m; range_end:; response_count:1; response_revision:1182; }","duration":"105.565071ms","start":"2024-04-19T23:59:42.954483Z","end":"2024-04-19T23:59:43.060048Z","steps":["trace[2128476879] 'range keys from in-memory index tree'  (duration: 105.189716ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:00:46.434089Z","caller":"traceutil/trace.go:171","msg":"trace[182930177] transaction","detail":"{read_only:false; response_revision:1436; number_of_response:1; }","duration":"139.13784ms","start":"2024-04-20T00:00:46.294917Z","end":"2024-04-20T00:00:46.434054Z","steps":["trace[182930177] 'process raft request'  (duration: 139.005381ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:01:13.613903Z","caller":"traceutil/trace.go:171","msg":"trace[245126542] linearizableReadLoop","detail":"{readStateIndex:1660; appliedIndex:1659; }","duration":"172.188177ms","start":"2024-04-20T00:01:13.441676Z","end":"2024-04-20T00:01:13.613864Z","steps":["trace[245126542] 'read index received'  (duration: 172.07032ms)","trace[245126542] 'applied index is now lower than readState.Index'  (duration: 117.035µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T00:01:13.614215Z","caller":"traceutil/trace.go:171","msg":"trace[2116660004] transaction","detail":"{read_only:false; response_revision:1598; number_of_response:1; }","duration":"179.517516ms","start":"2024-04-20T00:01:13.434681Z","end":"2024-04-20T00:01:13.614199Z","steps":["trace[2116660004] 'process raft request'  (duration: 179.103168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:01:13.614446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.701396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3964"}
	{"level":"info","ts":"2024-04-20T00:01:13.614571Z","caller":"traceutil/trace.go:171","msg":"trace[2014175847] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1598; }","duration":"172.911378ms","start":"2024-04-20T00:01:13.441643Z","end":"2024-04-20T00:01:13.614555Z","steps":["trace[2014175847] 'agreement among raft nodes before linearized reading'  (duration: 172.67536ms)"],"step_count":1}
	
	
	==> gcp-auth [f609f34145ab572adcef26b266701098292d54c4f0a572f46fa68f71682bac16] <==
	2024/04/20 00:00:28 Ready to write response ...
	2024/04/20 00:00:28 Ready to marshal response ...
	2024/04/20 00:00:28 Ready to write response ...
	2024/04/20 00:00:35 Ready to marshal response ...
	2024/04/20 00:00:35 Ready to write response ...
	2024/04/20 00:00:38 Ready to marshal response ...
	2024/04/20 00:00:38 Ready to write response ...
	2024/04/20 00:00:42 Ready to marshal response ...
	2024/04/20 00:00:42 Ready to write response ...
	2024/04/20 00:00:50 Ready to marshal response ...
	2024/04/20 00:00:50 Ready to write response ...
	2024/04/20 00:00:55 Ready to marshal response ...
	2024/04/20 00:00:55 Ready to write response ...
	2024/04/20 00:01:07 Ready to marshal response ...
	2024/04/20 00:01:07 Ready to write response ...
	2024/04/20 00:01:07 Ready to marshal response ...
	2024/04/20 00:01:07 Ready to write response ...
	2024/04/20 00:01:07 Ready to marshal response ...
	2024/04/20 00:01:07 Ready to write response ...
	2024/04/20 00:01:16 Ready to marshal response ...
	2024/04/20 00:01:16 Ready to write response ...
	2024/04/20 00:01:19 Ready to marshal response ...
	2024/04/20 00:01:19 Ready to write response ...
	2024/04/20 00:03:41 Ready to marshal response ...
	2024/04/20 00:03:41 Ready to write response ...
	
	
	==> kernel <==
	 00:03:53 up 6 min,  0 users,  load average: 1.08, 1.91, 1.06
	Linux addons-903502 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46] <==
	E0420 00:00:14.137069       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	E0420 00:00:51.706659       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0420 00:00:52.778480       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:60240: use of closed network connection
	I0420 00:00:55.227909       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0420 00:00:56.838442       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:60954: use of closed network connection
	I0420 00:01:07.356963       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.110.233"}
	I0420 00:01:19.724278       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0420 00:01:19.918422       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.4.224"}
	I0420 00:01:24.975394       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0420 00:01:26.044946       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0420 00:01:32.385130       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.385580       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:01:32.423825       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.424686       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:01:32.424888       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.454702       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.455639       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:01:32.459270       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.459342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0420 00:01:33.429402       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0420 00:01:33.460254       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0420 00:01:33.492740       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0420 00:03:41.376620       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.190.149"}
	E0420 00:03:44.681363       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0420 00:03:45.011815       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a] <==
	W0420 00:02:12.099140       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:02:12.099223       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:02:31.566342       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:02:31.566701       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:02:40.480431       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:02:40.480654       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:03:00.006655       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:03:00.006865       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:03:01.080333       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:03:01.080408       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:03:17.289393       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:03:17.289580       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:03:21.477309       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:03:21.477445       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:03:41.228190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="84.942156ms"
	I0420 00:03:41.242982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="14.623421ms"
	I0420 00:03:41.243943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="135.053µs"
	I0420 00:03:41.277128       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="35.841µs"
	I0420 00:03:44.904128       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0420 00:03:44.909873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="6.619µs"
	I0420 00:03:44.913348       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0420 00:03:45.085354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="17.427892ms"
	I0420 00:03:45.085465       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="54.05µs"
	W0420 00:03:51.858679       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:03:51.858883       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751] <==
	I0419 23:58:18.676567       1 server_linux.go:69] "Using iptables proxy"
	I0419 23:58:18.726017       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	I0419 23:58:18.915332       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 23:58:18.915433       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 23:58:18.915451       1 server_linux.go:165] "Using iptables Proxier"
	I0419 23:58:18.923684       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 23:58:18.923879       1 server.go:872] "Version info" version="v1.30.0"
	I0419 23:58:18.923918       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 23:58:18.925192       1 config.go:192] "Starting service config controller"
	I0419 23:58:18.925236       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 23:58:18.925254       1 config.go:101] "Starting endpoint slice config controller"
	I0419 23:58:18.925258       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 23:58:18.925707       1 config.go:319] "Starting node config controller"
	I0419 23:58:18.925742       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 23:58:19.029614       1 shared_informer.go:320] Caches are synced for node config
	I0419 23:58:19.029695       1 shared_informer.go:320] Caches are synced for service config
	I0419 23:58:19.029723       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf] <==
	W0419 23:58:00.654989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 23:58:00.655025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 23:58:00.655036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 23:58:00.655044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 23:58:00.655053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 23:58:00.655060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 23:58:01.464602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0419 23:58:01.468397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0419 23:58:01.518796       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0419 23:58:01.520639       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0419 23:58:01.595607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0419 23:58:01.595764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0419 23:58:01.597827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0419 23:58:01.597972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0419 23:58:01.621364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0419 23:58:01.621468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0419 23:58:01.628015       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0419 23:58:01.628275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0419 23:58:01.652810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 23:58:01.652908       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 23:58:01.736455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0419 23:58:01.736682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0419 23:58:01.818153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 23:58:01.818208       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 23:58:03.822095       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.224941    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="80ec45ea-7278-4269-9a2b-95c17a5d8905" containerName="csi-attacher"
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.224971    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8794fe-4529-45f6-8265-b00805d2c5a6" containerName="hostpath"
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.225001    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8794fe-4529-45f6-8265-b00805d2c5a6" containerName="node-driver-registrar"
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.225032    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b60c81-8c36-4525-b5fd-6679455e32e8" containerName="gadget"
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.225063    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8794fe-4529-45f6-8265-b00805d2c5a6" containerName="liveness-probe"
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.225095    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8794fe-4529-45f6-8265-b00805d2c5a6" containerName="csi-external-health-monitor-controller"
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.225134    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54b0683-814d-4b8a-8af4-3d470408bafd" containerName="task-pv-container"
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.225170    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8794fe-4529-45f6-8265-b00805d2c5a6" containerName="csi-provisioner"
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.279455    1277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hzkk\" (UniqueName: \"kubernetes.io/projected/35cf4ddc-6019-4f53-9d02-615978016068-kube-api-access-5hzkk\") pod \"hello-world-app-86c47465fc-wkhlc\" (UID: \"35cf4ddc-6019-4f53-9d02-615978016068\") " pod="default/hello-world-app-86c47465fc-wkhlc"
	Apr 20 00:03:41 addons-903502 kubelet[1277]: I0420 00:03:41.279584    1277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/35cf4ddc-6019-4f53-9d02-615978016068-gcp-creds\") pod \"hello-world-app-86c47465fc-wkhlc\" (UID: \"35cf4ddc-6019-4f53-9d02-615978016068\") " pod="default/hello-world-app-86c47465fc-wkhlc"
	Apr 20 00:03:43 addons-903502 kubelet[1277]: I0420 00:03:43.032635    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c833c9ada6406f9f52d78dc87044b61e3309b5e6456392b02078ba1af2aefdd"
	Apr 20 00:03:43 addons-903502 kubelet[1277]: I0420 00:03:43.208054    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j66mb\" (UniqueName: \"kubernetes.io/projected/abc6ceb0-2bb9-4edd-ae34-8021b81671b4-kube-api-access-j66mb\") pod \"abc6ceb0-2bb9-4edd-ae34-8021b81671b4\" (UID: \"abc6ceb0-2bb9-4edd-ae34-8021b81671b4\") "
	Apr 20 00:03:43 addons-903502 kubelet[1277]: I0420 00:03:43.224109    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc6ceb0-2bb9-4edd-ae34-8021b81671b4-kube-api-access-j66mb" (OuterVolumeSpecName: "kube-api-access-j66mb") pod "abc6ceb0-2bb9-4edd-ae34-8021b81671b4" (UID: "abc6ceb0-2bb9-4edd-ae34-8021b81671b4"). InnerVolumeSpecName "kube-api-access-j66mb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 20 00:03:43 addons-903502 kubelet[1277]: I0420 00:03:43.308825    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j66mb\" (UniqueName: \"kubernetes.io/projected/abc6ceb0-2bb9-4edd-ae34-8021b81671b4-kube-api-access-j66mb\") on node \"addons-903502\" DevicePath \"\""
	Apr 20 00:03:45 addons-903502 kubelet[1277]: I0420 00:03:45.138438    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7499a3ec-d5d2-4dea-8379-3b7e66590849" path="/var/lib/kubelet/pods/7499a3ec-d5d2-4dea-8379-3b7e66590849/volumes"
	Apr 20 00:03:45 addons-903502 kubelet[1277]: I0420 00:03:45.138974    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc6ceb0-2bb9-4edd-ae34-8021b81671b4" path="/var/lib/kubelet/pods/abc6ceb0-2bb9-4edd-ae34-8021b81671b4/volumes"
	Apr 20 00:03:45 addons-903502 kubelet[1277]: I0420 00:03:45.139436    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de775d00-997b-4e13-a5c6-7c639f3b341b" path="/var/lib/kubelet/pods/de775d00-997b-4e13-a5c6-7c639f3b341b/volumes"
	Apr 20 00:03:48 addons-903502 kubelet[1277]: I0420 00:03:48.147295    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14e9d29b-3218-41d5-ad2a-c451c4fff701-webhook-cert\") pod \"14e9d29b-3218-41d5-ad2a-c451c4fff701\" (UID: \"14e9d29b-3218-41d5-ad2a-c451c4fff701\") "
	Apr 20 00:03:48 addons-903502 kubelet[1277]: I0420 00:03:48.147336    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scsfn\" (UniqueName: \"kubernetes.io/projected/14e9d29b-3218-41d5-ad2a-c451c4fff701-kube-api-access-scsfn\") pod \"14e9d29b-3218-41d5-ad2a-c451c4fff701\" (UID: \"14e9d29b-3218-41d5-ad2a-c451c4fff701\") "
	Apr 20 00:03:48 addons-903502 kubelet[1277]: I0420 00:03:48.152795    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e9d29b-3218-41d5-ad2a-c451c4fff701-kube-api-access-scsfn" (OuterVolumeSpecName: "kube-api-access-scsfn") pod "14e9d29b-3218-41d5-ad2a-c451c4fff701" (UID: "14e9d29b-3218-41d5-ad2a-c451c4fff701"). InnerVolumeSpecName "kube-api-access-scsfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 20 00:03:48 addons-903502 kubelet[1277]: I0420 00:03:48.156621    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e9d29b-3218-41d5-ad2a-c451c4fff701-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "14e9d29b-3218-41d5-ad2a-c451c4fff701" (UID: "14e9d29b-3218-41d5-ad2a-c451c4fff701"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 20 00:03:48 addons-903502 kubelet[1277]: I0420 00:03:48.248772    1277 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14e9d29b-3218-41d5-ad2a-c451c4fff701-webhook-cert\") on node \"addons-903502\" DevicePath \"\""
	Apr 20 00:03:48 addons-903502 kubelet[1277]: I0420 00:03:48.248806    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-scsfn\" (UniqueName: \"kubernetes.io/projected/14e9d29b-3218-41d5-ad2a-c451c4fff701-kube-api-access-scsfn\") on node \"addons-903502\" DevicePath \"\""
	Apr 20 00:03:49 addons-903502 kubelet[1277]: I0420 00:03:49.097665    1277 scope.go:117] "RemoveContainer" containerID="7689c374b8a7680017b8f0b0fce59b0537cc4f154b19105f356e1b6250c24868"
	Apr 20 00:03:49 addons-903502 kubelet[1277]: I0420 00:03:49.137136    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e9d29b-3218-41d5-ad2a-c451c4fff701" path="/var/lib/kubelet/pods/14e9d29b-3218-41d5-ad2a-c451c4fff701/volumes"
	
	
	==> storage-provisioner [aa3e4688c82da65059d9a08a942a8551baa3a5acdc3f429353f2be3869643e4d] <==
	I0419 23:58:24.462056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0419 23:58:24.484235       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0419 23:58:24.484288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0419 23:58:24.500693       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0419 23:58:24.500871       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-903502_e145b11b-fa71-4df4-9ff9-2a0986c3c296!
	I0419 23:58:24.502760       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e6e0519-d9e7-4876-adec-39d19fbe23a7", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-903502_e145b11b-fa71-4df4-9ff9-2a0986c3c296 became leader
	I0419 23:58:24.601767       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-903502_e145b11b-fa71-4df4-9ff9-2a0986c3c296!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-903502 -n addons-903502
helpers_test.go:261: (dbg) Run:  kubectl --context addons-903502 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.70s)
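For manual triage of an Ingress failure like the one above, one possible approach (an editorial sketch assuming the addons-903502 profile still exists; these commands are not output from this run) is to inspect the objects the test created and repeat the probe the test issues from inside the VM:

	# inspect the Ingress, Service, and nginx pod created by the test
	kubectl --context addons-903502 get ingress,svc,pods -n default -o wide
	# repeat the in-VM probe, in the same form the test uses via 'minikube ssh'
	out/minikube-linux-amd64 -p addons-903502 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'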

TestAddons/parallel/MetricsServer (322.5s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.908078ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-msq6m" [9f348eb7-76f5-4a36-ad8b-50129a6f3ddf] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009082106s
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (87.308561ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 2m28.593065837s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (93.49886ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 2m32.682773952s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (80.557702ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 2m37.229486423s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (70.886658ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 2m46.27524186s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (78.548724ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 2m54.468470959s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (87.349152ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 3m6.114289063s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (78.612198ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 3m33.333039331s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (70.507202ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 4m10.219432701s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (81.684847ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 4m55.858882358s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (65.947454ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 6m11.060537398s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (63.786952ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 6m54.808755758s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-903502 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-903502 top pods -n kube-system: exit status 1 (64.242986ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-tjjdl, age: 7m43.006770802s

** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 addons disable metrics-server --alsologtostderr -v=1
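One way to triage the repeated "Metrics not available" errors above, sketched here under the assumption that the cluster is still reachable (these commands are illustrative and not part of the recorded run):

	# check that the metrics APIService is registered and reports Available
	kubectl --context addons-903502 get apiservice v1beta1.metrics.k8s.io
	# look at the metrics-server pod logs for scrape or TLS errors
	kubectl --context addons-903502 -n kube-system logs -l k8s-app=metrics-server --tail=50
	# query the metrics API directly; an error here explains why 'kubectl top pods' keeps failing
	kubectl --context addons-903502 get --raw /apis/metrics.k8s.io/v1beta1/nodes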
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-903502 -n addons-903502
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-903502 logs -n 25: (1.523956383s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| delete  | -p download-only-740714                                                                     | download-only-740714 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| delete  | -p download-only-347670                                                                     | download-only-347670 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| delete  | -p download-only-740714                                                                     | download-only-740714 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-466470 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |                     |
	|         | binary-mirror-466470                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39973                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-466470                                                                     | binary-mirror-466470 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| addons  | disable dashboard -p                                                                        | addons-903502        | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |                     |
	|         | addons-903502                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-903502        | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |                     |
	|         | addons-903502                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-903502 --wait=true                                                                | addons-903502        | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 20 Apr 24 00:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-903502 ssh cat                                                                       | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	|         | /opt/local-path-provisioner/pvc-1c22513e-d65d-44a6-87f2-b75cdb5b79eb_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	|         | -p addons-903502                                                                            |                      |         |         |                     |                     |
	| ip      | addons-903502 ip                                                                            | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:00 UTC | 20 Apr 24 00:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | addons-903502                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | -p addons-903502                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | addons-903502                                                                               |                      |         |         |                     |                     |
	| addons  | addons-903502 addons                                                                        | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-903502 ssh curl -s                                                                   | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-903502 addons                                                                        | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:01 UTC | 20 Apr 24 00:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-903502 ip                                                                            | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:03 UTC | 20 Apr 24 00:03 UTC |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:03 UTC | 20 Apr 24 00:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-903502 addons disable                                                                | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:03 UTC | 20 Apr 24 00:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-903502 addons                                                                        | addons-903502        | jenkins | v1.33.0 | 20 Apr 24 00:05 UTC | 20 Apr 24 00:05 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 23:57:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 23:57:22.839259   84284 out.go:291] Setting OutFile to fd 1 ...
	I0419 23:57:22.839450   84284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 23:57:22.839458   84284 out.go:304] Setting ErrFile to fd 2...
	I0419 23:57:22.839466   84284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 23:57:22.840127   84284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0419 23:57:22.840863   84284 out.go:298] Setting JSON to false
	I0419 23:57:22.841798   84284 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9590,"bootTime":1713561453,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 23:57:22.841862   84284 start.go:139] virtualization: kvm guest
	I0419 23:57:22.844247   84284 out.go:177] * [addons-903502] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 23:57:22.845734   84284 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 23:57:22.845788   84284 notify.go:220] Checking for updates...
	I0419 23:57:22.847106   84284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 23:57:22.848690   84284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0419 23:57:22.850185   84284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0419 23:57:22.851688   84284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 23:57:22.853127   84284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 23:57:22.854616   84284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 23:57:22.885006   84284 out.go:177] * Using the kvm2 driver based on user configuration
	I0419 23:57:22.886402   84284 start.go:297] selected driver: kvm2
	I0419 23:57:22.886414   84284 start.go:901] validating driver "kvm2" against <nil>
	I0419 23:57:22.886425   84284 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 23:57:22.887075   84284 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 23:57:22.887141   84284 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 23:57:22.901412   84284 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0419 23:57:22.901457   84284 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 23:57:22.901652   84284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 23:57:22.901703   84284 cni.go:84] Creating CNI manager for ""
	I0419 23:57:22.901716   84284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 23:57:22.901722   84284 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 23:57:22.901775   84284 start.go:340] cluster config:
	{Name:addons-903502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 23:57:22.901867   84284 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 23:57:22.903520   84284 out.go:177] * Starting "addons-903502" primary control-plane node in "addons-903502" cluster
	I0419 23:57:22.904770   84284 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 23:57:22.904803   84284 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 23:57:22.904813   84284 cache.go:56] Caching tarball of preloaded images
	I0419 23:57:22.904874   84284 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 23:57:22.904885   84284 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 23:57:22.905168   84284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/config.json ...
	I0419 23:57:22.905186   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/config.json: {Name:mk048214cc8bc5762238f2ad20bad9492a64d565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:22.905301   84284 start.go:360] acquireMachinesLock for addons-903502: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 23:57:22.905370   84284 start.go:364] duration metric: took 42.43µs to acquireMachinesLock for "addons-903502"
	I0419 23:57:22.905391   84284 start.go:93] Provisioning new machine with config: &{Name:addons-903502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 23:57:22.905450   84284 start.go:125] createHost starting for "" (driver="kvm2")
	I0419 23:57:22.907019   84284 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0419 23:57:22.907190   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:57:22.907229   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:57:22.920589   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0419 23:57:22.921090   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:57:22.921708   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:57:22.921728   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:57:22.922096   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:57:22.922243   84284 main.go:141] libmachine: (addons-903502) Calling .GetMachineName
	I0419 23:57:22.922400   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:22.922504   84284 start.go:159] libmachine.API.Create for "addons-903502" (driver="kvm2")
	I0419 23:57:22.922542   84284 client.go:168] LocalClient.Create starting
	I0419 23:57:22.922574   84284 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0419 23:57:23.148758   84284 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0419 23:57:23.250414   84284 main.go:141] libmachine: Running pre-create checks...
	I0419 23:57:23.250438   84284 main.go:141] libmachine: (addons-903502) Calling .PreCreateCheck
	I0419 23:57:23.250930   84284 main.go:141] libmachine: (addons-903502) Calling .GetConfigRaw
	I0419 23:57:23.251381   84284 main.go:141] libmachine: Creating machine...
	I0419 23:57:23.251398   84284 main.go:141] libmachine: (addons-903502) Calling .Create
	I0419 23:57:23.251550   84284 main.go:141] libmachine: (addons-903502) Creating KVM machine...
	I0419 23:57:23.252846   84284 main.go:141] libmachine: (addons-903502) DBG | found existing default KVM network
	I0419 23:57:23.253681   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.253525   84322 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0419 23:57:23.253736   84284 main.go:141] libmachine: (addons-903502) DBG | created network xml: 
	I0419 23:57:23.253754   84284 main.go:141] libmachine: (addons-903502) DBG | <network>
	I0419 23:57:23.253768   84284 main.go:141] libmachine: (addons-903502) DBG |   <name>mk-addons-903502</name>
	I0419 23:57:23.253774   84284 main.go:141] libmachine: (addons-903502) DBG |   <dns enable='no'/>
	I0419 23:57:23.253780   84284 main.go:141] libmachine: (addons-903502) DBG |   
	I0419 23:57:23.253794   84284 main.go:141] libmachine: (addons-903502) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0419 23:57:23.253813   84284 main.go:141] libmachine: (addons-903502) DBG |     <dhcp>
	I0419 23:57:23.253825   84284 main.go:141] libmachine: (addons-903502) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0419 23:57:23.253831   84284 main.go:141] libmachine: (addons-903502) DBG |     </dhcp>
	I0419 23:57:23.253835   84284 main.go:141] libmachine: (addons-903502) DBG |   </ip>
	I0419 23:57:23.253840   84284 main.go:141] libmachine: (addons-903502) DBG |   
	I0419 23:57:23.253848   84284 main.go:141] libmachine: (addons-903502) DBG | </network>
	I0419 23:57:23.253853   84284 main.go:141] libmachine: (addons-903502) DBG | 
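The driver emits a private libvirt network definition before creating the VM. A minimal sketch, assuming Go's text/template and the values logged above, of rendering an equivalent network XML (not the kvm2 driver's actual code):

package main

import (
	"os"
	"text/template"
)

const networkTmpl = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type network struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

func main() {
	n := network{
		Name:      "addons-903502",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	tmpl := template.Must(template.New("net").Parse(networkTmpl))
	// The rendered XML could then be handed to `virsh net-define` / `virsh net-start`,
	// which is effectively what the "trying to create private KVM network" step reports.
	_ = tmpl.Execute(os.Stdout, n)
}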
	I0419 23:57:23.258864   84284 main.go:141] libmachine: (addons-903502) DBG | trying to create private KVM network mk-addons-903502 192.168.39.0/24...
	I0419 23:57:23.321019   84284 main.go:141] libmachine: (addons-903502) DBG | private KVM network mk-addons-903502 192.168.39.0/24 created
	I0419 23:57:23.321052   84284 main.go:141] libmachine: (addons-903502) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502 ...
	I0419 23:57:23.321072   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.320962   84322 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0419 23:57:23.321111   84284 main.go:141] libmachine: (addons-903502) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0419 23:57:23.321129   84284 main.go:141] libmachine: (addons-903502) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 23:57:23.565891   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.565746   84322 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa...
	I0419 23:57:23.668632   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.668446   84322 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/addons-903502.rawdisk...
	I0419 23:57:23.668672   84284 main.go:141] libmachine: (addons-903502) DBG | Writing magic tar header
	I0419 23:57:23.668695   84284 main.go:141] libmachine: (addons-903502) DBG | Writing SSH key tar header
	I0419 23:57:23.668709   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:23.668659   84322 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502 ...
	I0419 23:57:23.668828   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502
	I0419 23:57:23.668851   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502 (perms=drwx------)
	I0419 23:57:23.668859   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0419 23:57:23.668866   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0419 23:57:23.668876   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0419 23:57:23.668882   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0419 23:57:23.668890   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 23:57:23.668898   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0419 23:57:23.668904   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0419 23:57:23.668924   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 23:57:23.668946   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home/jenkins
	I0419 23:57:23.668958   84284 main.go:141] libmachine: (addons-903502) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 23:57:23.668968   84284 main.go:141] libmachine: (addons-903502) Creating domain...
	I0419 23:57:23.668979   84284 main.go:141] libmachine: (addons-903502) DBG | Checking permissions on dir: /home
	I0419 23:57:23.668991   84284 main.go:141] libmachine: (addons-903502) DBG | Skipping /home - not owner
	I0419 23:57:23.670158   84284 main.go:141] libmachine: (addons-903502) define libvirt domain using xml: 
	I0419 23:57:23.670186   84284 main.go:141] libmachine: (addons-903502) <domain type='kvm'>
	I0419 23:57:23.670205   84284 main.go:141] libmachine: (addons-903502)   <name>addons-903502</name>
	I0419 23:57:23.670232   84284 main.go:141] libmachine: (addons-903502)   <memory unit='MiB'>4000</memory>
	I0419 23:57:23.670246   84284 main.go:141] libmachine: (addons-903502)   <vcpu>2</vcpu>
	I0419 23:57:23.670253   84284 main.go:141] libmachine: (addons-903502)   <features>
	I0419 23:57:23.670269   84284 main.go:141] libmachine: (addons-903502)     <acpi/>
	I0419 23:57:23.670287   84284 main.go:141] libmachine: (addons-903502)     <apic/>
	I0419 23:57:23.670312   84284 main.go:141] libmachine: (addons-903502)     <pae/>
	I0419 23:57:23.670334   84284 main.go:141] libmachine: (addons-903502)     
	I0419 23:57:23.670358   84284 main.go:141] libmachine: (addons-903502)   </features>
	I0419 23:57:23.670377   84284 main.go:141] libmachine: (addons-903502)   <cpu mode='host-passthrough'>
	I0419 23:57:23.670395   84284 main.go:141] libmachine: (addons-903502)   
	I0419 23:57:23.670416   84284 main.go:141] libmachine: (addons-903502)   </cpu>
	I0419 23:57:23.670431   84284 main.go:141] libmachine: (addons-903502)   <os>
	I0419 23:57:23.670444   84284 main.go:141] libmachine: (addons-903502)     <type>hvm</type>
	I0419 23:57:23.670454   84284 main.go:141] libmachine: (addons-903502)     <boot dev='cdrom'/>
	I0419 23:57:23.670466   84284 main.go:141] libmachine: (addons-903502)     <boot dev='hd'/>
	I0419 23:57:23.670480   84284 main.go:141] libmachine: (addons-903502)     <bootmenu enable='no'/>
	I0419 23:57:23.670504   84284 main.go:141] libmachine: (addons-903502)   </os>
	I0419 23:57:23.670518   84284 main.go:141] libmachine: (addons-903502)   <devices>
	I0419 23:57:23.670531   84284 main.go:141] libmachine: (addons-903502)     <disk type='file' device='cdrom'>
	I0419 23:57:23.670561   84284 main.go:141] libmachine: (addons-903502)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/boot2docker.iso'/>
	I0419 23:57:23.670579   84284 main.go:141] libmachine: (addons-903502)       <target dev='hdc' bus='scsi'/>
	I0419 23:57:23.670592   84284 main.go:141] libmachine: (addons-903502)       <readonly/>
	I0419 23:57:23.670603   84284 main.go:141] libmachine: (addons-903502)     </disk>
	I0419 23:57:23.670617   84284 main.go:141] libmachine: (addons-903502)     <disk type='file' device='disk'>
	I0419 23:57:23.670631   84284 main.go:141] libmachine: (addons-903502)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 23:57:23.670653   84284 main.go:141] libmachine: (addons-903502)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/addons-903502.rawdisk'/>
	I0419 23:57:23.670667   84284 main.go:141] libmachine: (addons-903502)       <target dev='hda' bus='virtio'/>
	I0419 23:57:23.670676   84284 main.go:141] libmachine: (addons-903502)     </disk>
	I0419 23:57:23.670682   84284 main.go:141] libmachine: (addons-903502)     <interface type='network'>
	I0419 23:57:23.670691   84284 main.go:141] libmachine: (addons-903502)       <source network='mk-addons-903502'/>
	I0419 23:57:23.670696   84284 main.go:141] libmachine: (addons-903502)       <model type='virtio'/>
	I0419 23:57:23.670703   84284 main.go:141] libmachine: (addons-903502)     </interface>
	I0419 23:57:23.670708   84284 main.go:141] libmachine: (addons-903502)     <interface type='network'>
	I0419 23:57:23.670715   84284 main.go:141] libmachine: (addons-903502)       <source network='default'/>
	I0419 23:57:23.670719   84284 main.go:141] libmachine: (addons-903502)       <model type='virtio'/>
	I0419 23:57:23.670725   84284 main.go:141] libmachine: (addons-903502)     </interface>
	I0419 23:57:23.670730   84284 main.go:141] libmachine: (addons-903502)     <serial type='pty'>
	I0419 23:57:23.670738   84284 main.go:141] libmachine: (addons-903502)       <target port='0'/>
	I0419 23:57:23.670742   84284 main.go:141] libmachine: (addons-903502)     </serial>
	I0419 23:57:23.670750   84284 main.go:141] libmachine: (addons-903502)     <console type='pty'>
	I0419 23:57:23.670760   84284 main.go:141] libmachine: (addons-903502)       <target type='serial' port='0'/>
	I0419 23:57:23.670768   84284 main.go:141] libmachine: (addons-903502)     </console>
	I0419 23:57:23.670772   84284 main.go:141] libmachine: (addons-903502)     <rng model='virtio'>
	I0419 23:57:23.670781   84284 main.go:141] libmachine: (addons-903502)       <backend model='random'>/dev/random</backend>
	I0419 23:57:23.670785   84284 main.go:141] libmachine: (addons-903502)     </rng>
	I0419 23:57:23.670790   84284 main.go:141] libmachine: (addons-903502)     
	I0419 23:57:23.670796   84284 main.go:141] libmachine: (addons-903502)     
	I0419 23:57:23.670804   84284 main.go:141] libmachine: (addons-903502)   </devices>
	I0419 23:57:23.670809   84284 main.go:141] libmachine: (addons-903502) </domain>
	I0419 23:57:23.670819   84284 main.go:141] libmachine: (addons-903502) 
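With the domain XML assembled, the driver defines and boots the guest. A hedged sketch of doing the same by shelling out to virsh (the kvm2 plugin talks to libvirt through its API rather than the CLI; the XML path here is a placeholder):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Define the domain from an XML file, then start it, against qemu:///system
	// as seen in the KVMQemuURI field of the config dump above.
	for _, args := range [][]string{
		{"virsh", "--connect", "qemu:///system", "define", "addons-903502.xml"},
		{"virsh", "--connect", "qemu:///system", "start", "addons-903502"},
	} {
		cmd := exec.Command(args[0], args[1:]...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", args, err, out)
		}
	}
}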
	I0419 23:57:23.675187   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:f5:e9:64 in network default
	I0419 23:57:23.675718   84284 main.go:141] libmachine: (addons-903502) Ensuring networks are active...
	I0419 23:57:23.675760   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:23.676396   84284 main.go:141] libmachine: (addons-903502) Ensuring network default is active
	I0419 23:57:23.676772   84284 main.go:141] libmachine: (addons-903502) Ensuring network mk-addons-903502 is active
	I0419 23:57:23.677194   84284 main.go:141] libmachine: (addons-903502) Getting domain xml...
	I0419 23:57:23.677850   84284 main.go:141] libmachine: (addons-903502) Creating domain...
	I0419 23:57:24.844776   84284 main.go:141] libmachine: (addons-903502) Waiting to get IP...
	I0419 23:57:24.845816   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:24.846189   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:24.846215   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:24.846169   84322 retry.go:31] will retry after 240.816363ms: waiting for machine to come up
	I0419 23:57:25.088865   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:25.089383   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:25.089431   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:25.089345   84322 retry.go:31] will retry after 283.575672ms: waiting for machine to come up
	I0419 23:57:25.374846   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:25.375271   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:25.375297   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:25.375216   84322 retry.go:31] will retry after 425.312228ms: waiting for machine to come up
	I0419 23:57:25.801682   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:25.802054   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:25.802076   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:25.802009   84322 retry.go:31] will retry after 407.959354ms: waiting for machine to come up
	I0419 23:57:26.211491   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:26.211946   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:26.211977   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:26.211906   84322 retry.go:31] will retry after 680.332989ms: waiting for machine to come up
	I0419 23:57:26.893729   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:26.894144   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:26.894177   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:26.894101   84322 retry.go:31] will retry after 574.715983ms: waiting for machine to come up
	I0419 23:57:27.471195   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:27.471737   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:27.471763   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:27.471660   84322 retry.go:31] will retry after 1.018392314s: waiting for machine to come up
	I0419 23:57:28.491524   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:28.491978   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:28.492012   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:28.491918   84322 retry.go:31] will retry after 1.121833343s: waiting for machine to come up
	I0419 23:57:29.615143   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:29.615514   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:29.615543   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:29.615455   84322 retry.go:31] will retry after 1.797582766s: waiting for machine to come up
	I0419 23:57:31.415437   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:31.415822   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:31.415857   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:31.415780   84322 retry.go:31] will retry after 1.441079659s: waiting for machine to come up
	I0419 23:57:32.857975   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:32.858423   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:32.858453   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:32.858373   84322 retry.go:31] will retry after 1.808645557s: waiting for machine to come up
	I0419 23:57:34.669892   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:34.670355   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:34.670388   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:34.670293   84322 retry.go:31] will retry after 2.46773113s: waiting for machine to come up
	I0419 23:57:37.141143   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:37.141677   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:37.141698   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:37.141638   84322 retry.go:31] will retry after 3.530647149s: waiting for machine to come up
	I0419 23:57:40.675702   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:40.676074   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find current IP address of domain addons-903502 in network mk-addons-903502
	I0419 23:57:40.676103   84284 main.go:141] libmachine: (addons-903502) DBG | I0419 23:57:40.676029   84322 retry.go:31] will retry after 4.808012141s: waiting for machine to come up
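The repeated "will retry after ..." lines above are a poll-with-growing-delay loop waiting for the guest to obtain a DHCP lease. A minimal sketch of that pattern, assuming a hypothetical lookupIP helper rather than minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for querying libvirt for the domain's DHCP lease.
func lookupIP() (string, error) { return "", errNoLease }

func main() {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(3 * time.Second) // short deadline for the sketch
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, mirroring the increasing intervals above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	fmt.Println("timed out waiting for an IP")
}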
	I0419 23:57:45.486900   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.487349   84284 main.go:141] libmachine: (addons-903502) Found IP for machine: 192.168.39.36
	I0419 23:57:45.487374   84284 main.go:141] libmachine: (addons-903502) Reserving static IP address...
	I0419 23:57:45.487412   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has current primary IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.487704   84284 main.go:141] libmachine: (addons-903502) DBG | unable to find host DHCP lease matching {name: "addons-903502", mac: "52:54:00:6a:a2:50", ip: "192.168.39.36"} in network mk-addons-903502
	I0419 23:57:45.558780   84284 main.go:141] libmachine: (addons-903502) DBG | Getting to WaitForSSH function...
	I0419 23:57:45.558814   84284 main.go:141] libmachine: (addons-903502) Reserved static IP address: 192.168.39.36
	I0419 23:57:45.558830   84284 main.go:141] libmachine: (addons-903502) Waiting for SSH to be available...
	I0419 23:57:45.561611   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.562153   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:45.562180   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.562377   84284 main.go:141] libmachine: (addons-903502) DBG | Using SSH client type: external
	I0419 23:57:45.562418   84284 main.go:141] libmachine: (addons-903502) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa (-rw-------)
	I0419 23:57:45.562464   84284 main.go:141] libmachine: (addons-903502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 23:57:45.562486   84284 main.go:141] libmachine: (addons-903502) DBG | About to run SSH command:
	I0419 23:57:45.562503   84284 main.go:141] libmachine: (addons-903502) DBG | exit 0
	I0419 23:57:45.685138   84284 main.go:141] libmachine: (addons-903502) DBG | SSH cmd err, output: <nil>: 
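The availability probe above runs `exit 0` through the external OpenSSH client with the options shown in the DBG line. A sketch of issuing the same probe from Go (the key path and host below are placeholders taken from the log):

package main

import (
	"log"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/machines/addons-903502/id_rsa", // placeholder key path
		"-p", "22",
		"docker@192.168.39.36",
		"exit 0", // the availability probe run in the log
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		log.Fatalf("ssh probe failed: %v\n%s", err, out)
	}
	log.Println("SSH is available")
}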
	I0419 23:57:45.685384   84284 main.go:141] libmachine: (addons-903502) KVM machine creation complete!
	I0419 23:57:45.685693   84284 main.go:141] libmachine: (addons-903502) Calling .GetConfigRaw
	I0419 23:57:45.686226   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:45.686437   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:45.686610   84284 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 23:57:45.686630   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:57:45.687907   84284 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 23:57:45.687952   84284 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 23:57:45.687964   84284 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 23:57:45.687973   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:45.690289   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.690640   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:45.690662   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.690784   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:45.690970   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.691137   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.691255   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:45.691453   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:45.691640   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:45.691651   84284 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 23:57:45.788819   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 23:57:45.788840   84284 main.go:141] libmachine: Detecting the provisioner...
	I0419 23:57:45.788847   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:45.791762   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.792106   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:45.792140   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.792268   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:45.792475   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.792624   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.792765   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:45.792921   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:45.793074   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:45.793084   84284 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 23:57:45.890376   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 23:57:45.890481   84284 main.go:141] libmachine: found compatible host: buildroot
	I0419 23:57:45.890491   84284 main.go:141] libmachine: Provisioning with buildroot...
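Provisioner detection works by reading /etc/os-release on the guest and matching the ID field. A small local sketch of that check (minikube runs it over SSH, as logged above):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		fmt.Println("cannot read os-release:", err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			if id == "buildroot" {
				fmt.Println("found compatible host: buildroot")
			} else {
				fmt.Println("host ID:", id)
			}
		}
	}
}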
	I0419 23:57:45.890500   84284 main.go:141] libmachine: (addons-903502) Calling .GetMachineName
	I0419 23:57:45.890771   84284 buildroot.go:166] provisioning hostname "addons-903502"
	I0419 23:57:45.890796   84284 main.go:141] libmachine: (addons-903502) Calling .GetMachineName
	I0419 23:57:45.891003   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:45.893656   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.894063   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:45.894112   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:45.894267   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:45.894436   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.894599   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:45.894786   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:45.894936   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:45.895129   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:45.895145   84284 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-903502 && echo "addons-903502" | sudo tee /etc/hostname
	I0419 23:57:46.007479   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-903502
	
	I0419 23:57:46.007515   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.010231   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.010583   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.010606   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.010817   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.011021   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.011195   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.011356   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.011548   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:46.011716   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:46.011731   84284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-903502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-903502/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-903502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 23:57:46.118975   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 23:57:46.119000   84284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0419 23:57:46.119018   84284 buildroot.go:174] setting up certificates
	I0419 23:57:46.119027   84284 provision.go:84] configureAuth start
	I0419 23:57:46.119035   84284 main.go:141] libmachine: (addons-903502) Calling .GetMachineName
	I0419 23:57:46.119325   84284 main.go:141] libmachine: (addons-903502) Calling .GetIP
	I0419 23:57:46.122046   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.122448   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.122484   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.122609   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.124832   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.125190   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.125220   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.125463   84284 provision.go:143] copyHostCerts
	I0419 23:57:46.125572   84284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0419 23:57:46.125764   84284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0419 23:57:46.125890   84284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0419 23:57:46.125985   84284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.addons-903502 san=[127.0.0.1 192.168.39.36 addons-903502 localhost minikube]
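The server certificate above is generated with SANs covering 127.0.0.1, the guest IP, the hostname, localhost and minikube, and a 26280h expiry. A reduced sketch with Go's crypto/x509; the real flow signs with the minikube CA, while this version self-signs to stay self-contained:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-903502"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-903502", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.36")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	// Emit the certificate in PEM form, analogous to server.pem above.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}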
	I0419 23:57:46.214917   84284 provision.go:177] copyRemoteCerts
	I0419 23:57:46.214978   84284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 23:57:46.215002   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.217813   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.218117   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.218149   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.218301   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.218504   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.218642   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.218781   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:57:46.296979   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 23:57:46.322240   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 23:57:46.346920   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 23:57:46.371692   84284 provision.go:87] duration metric: took 252.65329ms to configureAuth
	I0419 23:57:46.371713   84284 buildroot.go:189] setting minikube options for container-runtime
	I0419 23:57:46.371896   84284 config.go:182] Loaded profile config "addons-903502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 23:57:46.371997   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.374816   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.375130   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.375201   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.375309   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.375535   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.375704   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.375945   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.376119   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:46.376309   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:46.376328   84284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 23:57:46.639600   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 23:57:46.639630   84284 main.go:141] libmachine: Checking connection to Docker...
	I0419 23:57:46.639704   84284 main.go:141] libmachine: (addons-903502) Calling .GetURL
	I0419 23:57:46.641174   84284 main.go:141] libmachine: (addons-903502) DBG | Using libvirt version 6000000
	I0419 23:57:46.643979   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.644330   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.644352   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.644511   84284 main.go:141] libmachine: Docker is up and running!
	I0419 23:57:46.644523   84284 main.go:141] libmachine: Reticulating splines...
	I0419 23:57:46.644531   84284 client.go:171] duration metric: took 23.721978225s to LocalClient.Create
	I0419 23:57:46.644558   84284 start.go:167] duration metric: took 23.722053862s to libmachine.API.Create "addons-903502"
	I0419 23:57:46.644577   84284 start.go:293] postStartSetup for "addons-903502" (driver="kvm2")
	I0419 23:57:46.644587   84284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 23:57:46.644604   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.644868   84284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 23:57:46.644898   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.647177   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.647468   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.647493   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.647665   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.647832   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.647984   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.648098   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:57:46.728809   84284 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 23:57:46.733520   84284 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 23:57:46.733544   84284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0419 23:57:46.733597   84284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0419 23:57:46.733615   84284 start.go:296] duration metric: took 89.033019ms for postStartSetup
	I0419 23:57:46.733653   84284 main.go:141] libmachine: (addons-903502) Calling .GetConfigRaw
	I0419 23:57:46.734243   84284 main.go:141] libmachine: (addons-903502) Calling .GetIP
	I0419 23:57:46.736755   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.737208   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.737237   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.737484   84284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/config.json ...
	I0419 23:57:46.737648   84284 start.go:128] duration metric: took 23.832188673s to createHost
	I0419 23:57:46.737669   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.739704   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.740025   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.740061   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.740164   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.740326   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.740461   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.740614   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.740768   84284 main.go:141] libmachine: Using SSH client type: native
	I0419 23:57:46.740915   84284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0419 23:57:46.740926   84284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 23:57:46.838157   84284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713571066.808343499
	
	I0419 23:57:46.838180   84284 fix.go:216] guest clock: 1713571066.808343499
	I0419 23:57:46.838189   84284 fix.go:229] Guest: 2024-04-19 23:57:46.808343499 +0000 UTC Remote: 2024-04-19 23:57:46.737658804 +0000 UTC m=+23.944466384 (delta=70.684695ms)
	I0419 23:57:46.838239   84284 fix.go:200] guest clock delta is within tolerance: 70.684695ms
	I0419 23:57:46.838256   84284 start.go:83] releasing machines lock for "addons-903502", held for 23.932863676s
	I0419 23:57:46.838281   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.838551   84284 main.go:141] libmachine: (addons-903502) Calling .GetIP
	I0419 23:57:46.841103   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.841430   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.841461   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.841601   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.842079   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.842255   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:57:46.842385   84284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 23:57:46.842431   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.842476   84284 ssh_runner.go:195] Run: cat /version.json
	I0419 23:57:46.842499   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:57:46.845045   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.845410   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.845442   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.845556   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.845618   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.845741   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.845894   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.845968   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:46.845992   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:46.846071   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:57:46.846118   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:57:46.846266   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:57:46.846442   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:57:46.846586   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:57:46.918231   84284 ssh_runner.go:195] Run: systemctl --version
	I0419 23:57:46.977113   84284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 23:57:47.141207   84284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 23:57:47.148596   84284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 23:57:47.148654   84284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 23:57:47.166822   84284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 23:57:47.166847   84284 start.go:494] detecting cgroup driver to use...
	I0419 23:57:47.166896   84284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 23:57:47.187854   84284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 23:57:47.206086   84284 docker.go:217] disabling cri-docker service (if available) ...
	I0419 23:57:47.206129   84284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 23:57:47.223019   84284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 23:57:47.238003   84284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 23:57:47.362643   84284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 23:57:47.494115   84284 docker.go:233] disabling docker service ...
	I0419 23:57:47.494193   84284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 23:57:47.508909   84284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 23:57:47.523235   84284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 23:57:47.666908   84284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 23:57:47.788905   84284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 23:57:47.804092   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 23:57:47.824973   84284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 23:57:47.825038   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.836688   84284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 23:57:47.836746   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.848601   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.862956   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.877355   84284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 23:57:47.892878   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.907307   84284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 23:57:47.928116   84284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
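The sed invocations above pin the pause image and switch cri-o to the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf. An illustrative Go equivalent of the two main substitutions (the real flow runs sed over SSH exactly as logged; the path below is a local stand-in):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Replace any existing pause_image and cgroup_manager lines, as the sed
	// expressions in the log do.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}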
	I0419 23:57:47.942287   84284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 23:57:47.955256   84284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 23:57:47.955308   84284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 23:57:47.973724   84284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
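When the bridge-nf-call-iptables sysctl is missing, as in the status-255 error above, the fix is to load br_netfilter and then enable IPv4 forwarding. A sketch of that preparation run locally (requires root; minikube issues the same commands over SSH):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl file does not exist, the module is not loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}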
	I0419 23:57:47.996587   84284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 23:57:48.134712   84284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 23:57:48.280505   84284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 23:57:48.280591   84284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 23:57:48.286116   84284 start.go:562] Will wait 60s for crictl version
	I0419 23:57:48.286189   84284 ssh_runner.go:195] Run: which crictl
	I0419 23:57:48.290183   84284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 23:57:48.328749   84284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 23:57:48.328862   84284 ssh_runner.go:195] Run: crio --version
	I0419 23:57:48.358701   84284 ssh_runner.go:195] Run: crio --version
	I0419 23:57:48.389622   84284 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 23:57:48.390919   84284 main.go:141] libmachine: (addons-903502) Calling .GetIP
	I0419 23:57:48.393703   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:48.394091   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:57:48.394114   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:57:48.394320   84284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 23:57:48.398975   84284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 23:57:48.414988   84284 kubeadm.go:877] updating cluster {Name:addons-903502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 23:57:48.415118   84284 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 23:57:48.415165   84284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 23:57:48.453153   84284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0419 23:57:48.453242   84284 ssh_runner.go:195] Run: which lz4
	I0419 23:57:48.457649   84284 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 23:57:48.462374   84284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 23:57:48.462396   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0419 23:57:50.034281   84284 crio.go:462] duration metric: took 1.576677218s to copy over tarball
	I0419 23:57:50.034368   84284 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 23:57:52.494020   84284 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.459608363s)
	I0419 23:57:52.494053   84284 crio.go:469] duration metric: took 2.459736027s to extract the tarball
	I0419 23:57:52.494067   84284 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 23:57:52.533083   84284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 23:57:52.576970   84284 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 23:57:52.576998   84284 cache_images.go:84] Images are preloaded, skipping loading
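
Note: the preload handling above is a simple check/copy/extract sequence. An illustrative shell equivalent of what the log shows (same paths and flags; the tarball itself is copied to the guest over SSH by minikube):

    # ask CRI-O whether the Kubernetes images are already present
    sudo crictl images --output json
    # the tarball is absent on first start, so it is copied to /preloaded.tar.lz4
    # and unpacked into /var (CRI-O's image store lives under /var/lib/containers)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
    # a second listing now reports the images as preloaded
    sudo crictl images --output json
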
	I0419 23:57:52.577014   84284 kubeadm.go:928] updating node { 192.168.39.36 8443 v1.30.0 crio true true} ...
	I0419 23:57:52.577180   84284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-903502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 23:57:52.577249   84284 ssh_runner.go:195] Run: crio config
	I0419 23:57:52.623878   84284 cni.go:84] Creating CNI manager for ""
	I0419 23:57:52.623904   84284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 23:57:52.623920   84284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 23:57:52.623942   84284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-903502 NodeName:addons-903502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 23:57:52.624098   84284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-903502"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 23:57:52.624160   84284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 23:57:52.634902   84284 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 23:57:52.634963   84284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 23:57:52.645324   84284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0419 23:57:52.664775   84284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 23:57:52.682779   84284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0419 23:57:52.701117   84284 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0419 23:57:52.705838   84284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 23:57:52.719762   84284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 23:57:52.842826   84284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 23:57:52.861046   84284 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502 for IP: 192.168.39.36
	I0419 23:57:52.861073   84284 certs.go:194] generating shared ca certs ...
	I0419 23:57:52.861093   84284 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:52.861257   84284 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0419 23:57:52.968483   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt ...
	I0419 23:57:52.968510   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt: {Name:mk3e7941e28c54cac53c4989f2f18b35b315eb8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:52.968667   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key ...
	I0419 23:57:52.968678   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key: {Name:mk55446d928fb96f6a08651efbd5210423732b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:52.968748   84284 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0419 23:57:53.281329   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt ...
	I0419 23:57:53.281364   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt: {Name:mk01bba80ee303a40e5842406ec49b102a0f4de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.281541   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key ...
	I0419 23:57:53.281556   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key: {Name:mk685d994480bdb16a26b8b4354904f3d219044d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.281664   84284 certs.go:256] generating profile certs ...
	I0419 23:57:53.281724   84284 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.key
	I0419 23:57:53.281741   84284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt with IP's: []
	I0419 23:57:53.466222   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt ...
	I0419 23:57:53.466258   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: {Name:mkf4d8cb8884cf8b66721cc1da8dcafd60ee33d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.466422   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.key ...
	I0419 23:57:53.466433   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.key: {Name:mk9dfadc37c3da14550d2574628348d276523fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.466501   84284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key.bbc991ce
	I0419 23:57:53.466518   84284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt.bbc991ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.36]
	I0419 23:57:53.547735   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt.bbc991ce ...
	I0419 23:57:53.547770   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt.bbc991ce: {Name:mkc1ce65e23fe7bc0321991d4aa57384f5061964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.547931   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key.bbc991ce ...
	I0419 23:57:53.547945   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key.bbc991ce: {Name:mkf41bb88da9613c623f61d1bd00af8cbb18fa53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.548012   84284 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt.bbc991ce -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt
	I0419 23:57:53.548106   84284 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key.bbc991ce -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key
	I0419 23:57:53.548158   84284 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.key
	I0419 23:57:53.548176   84284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.crt with IP's: []
	I0419 23:57:53.629671   84284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.crt ...
	I0419 23:57:53.629711   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.crt: {Name:mk44855f25552262df405e97b9728f6df6a04fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.629902   84284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.key ...
	I0419 23:57:53.629916   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.key: {Name:mkbda7245c6475c666ba0dd184a96313960f3bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:53.630116   84284 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0419 23:57:53.630155   84284 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0419 23:57:53.630181   84284 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0419 23:57:53.630208   84284 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0419 23:57:53.630848   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 23:57:53.664786   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0419 23:57:53.713247   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 23:57:53.742720   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 23:57:53.771611   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0419 23:57:53.799826   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 23:57:53.828725   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 23:57:53.856838   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 23:57:53.885159   84284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 23:57:53.913809   84284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0419 23:57:53.933377   84284 ssh_runner.go:195] Run: openssl version
	I0419 23:57:53.940096   84284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 23:57:53.952818   84284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 23:57:53.958299   84284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0419 23:57:53.958381   84284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 23:57:53.965328   84284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
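
Note: the ln/openssl steps above install the minikube CA into the guest's OpenSSL trust store; b5213941.0 is the subject-hash symlink OpenSSL uses for lookups. An illustrative shell sequence with the same paths as the log:

    # expose the CA under /etc/ssl/certs and link it by its subject hash
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
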
	I0419 23:57:53.978050   84284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 23:57:53.982952   84284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 23:57:53.983040   84284 kubeadm.go:391] StartCluster: {Name:addons-903502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-903502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 23:57:53.983129   84284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 23:57:53.983211   84284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 23:57:54.027113   84284 cri.go:89] found id: ""
	I0419 23:57:54.027187   84284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 23:57:54.038615   84284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 23:57:54.049532   84284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 23:57:54.060740   84284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 23:57:54.060761   84284 kubeadm.go:156] found existing configuration files:
	
	I0419 23:57:54.060818   84284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 23:57:54.071603   84284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 23:57:54.071671   84284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 23:57:54.084321   84284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 23:57:54.095721   84284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 23:57:54.095779   84284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 23:57:54.105780   84284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 23:57:54.116316   84284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 23:57:54.116374   84284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 23:57:54.126569   84284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 23:57:54.136431   84284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 23:57:54.136489   84284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 23:57:54.146717   84284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 23:57:54.204386   84284 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0419 23:57:54.204473   84284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 23:57:54.346746   84284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 23:57:54.346938   84284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 23:57:54.347112   84284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 23:57:54.613352   84284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 23:57:54.713349   84284 out.go:204]   - Generating certificates and keys ...
	I0419 23:57:54.713488   84284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 23:57:54.713605   84284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 23:57:54.748316   84284 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 23:57:54.796222   84284 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0419 23:57:54.872233   84284 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0419 23:57:54.997599   84284 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0419 23:57:55.288948   84284 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0419 23:57:55.290846   84284 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-903502 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0419 23:57:55.426320   84284 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0419 23:57:55.426627   84284 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-903502 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0419 23:57:55.585716   84284 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 23:57:55.842262   84284 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 23:57:56.063757   84284 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0419 23:57:56.063858   84284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 23:57:56.230739   84284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 23:57:56.297623   84284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 23:57:56.379194   84284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 23:57:56.545827   84284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 23:57:56.725283   84284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 23:57:56.725989   84284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 23:57:56.728397   84284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 23:57:56.730747   84284 out.go:204]   - Booting up control plane ...
	I0419 23:57:56.730858   84284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 23:57:56.730946   84284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 23:57:56.732254   84284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 23:57:56.748507   84284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 23:57:56.749085   84284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 23:57:56.749156   84284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 23:57:56.882141   84284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0419 23:57:56.882266   84284 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 23:57:57.884238   84284 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.003081997s
	I0419 23:57:57.884317   84284 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0419 23:58:02.383206   84284 kubeadm.go:309] [api-check] The API server is healthy after 4.501707364s
	I0419 23:58:02.396410   84284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 23:58:02.413687   84284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 23:58:02.443398   84284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 23:58:02.443588   84284 kubeadm.go:309] [mark-control-plane] Marking the node addons-903502 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 23:58:02.459024   84284 kubeadm.go:309] [bootstrap-token] Using token: xhm5bp.g0g44g1zazvpep10
	I0419 23:58:02.460719   84284 out.go:204]   - Configuring RBAC rules ...
	I0419 23:58:02.460874   84284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 23:58:02.464615   84284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 23:58:02.474749   84284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 23:58:02.477727   84284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 23:58:02.480906   84284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 23:58:02.484362   84284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 23:58:02.790036   84284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 23:58:03.227734   84284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 23:58:03.789998   84284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 23:58:03.790023   84284 kubeadm.go:309] 
	I0419 23:58:03.790095   84284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 23:58:03.790107   84284 kubeadm.go:309] 
	I0419 23:58:03.790191   84284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 23:58:03.790235   84284 kubeadm.go:309] 
	I0419 23:58:03.790298   84284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 23:58:03.790385   84284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 23:58:03.790435   84284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 23:58:03.790442   84284 kubeadm.go:309] 
	I0419 23:58:03.790509   84284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 23:58:03.790522   84284 kubeadm.go:309] 
	I0419 23:58:03.790585   84284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 23:58:03.790597   84284 kubeadm.go:309] 
	I0419 23:58:03.790674   84284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 23:58:03.790780   84284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 23:58:03.790875   84284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 23:58:03.790884   84284 kubeadm.go:309] 
	I0419 23:58:03.790956   84284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 23:58:03.791064   84284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 23:58:03.791077   84284 kubeadm.go:309] 
	I0419 23:58:03.791217   84284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xhm5bp.g0g44g1zazvpep10 \
	I0419 23:58:03.791386   84284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0419 23:58:03.791427   84284 kubeadm.go:309] 	--control-plane 
	I0419 23:58:03.791442   84284 kubeadm.go:309] 
	I0419 23:58:03.791547   84284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 23:58:03.791566   84284 kubeadm.go:309] 
	I0419 23:58:03.791678   84284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xhm5bp.g0g44g1zazvpep10 \
	I0419 23:58:03.791845   84284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0419 23:58:03.792028   84284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 23:58:03.792055   84284 cni.go:84] Creating CNI manager for ""
	I0419 23:58:03.792068   84284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 23:58:03.793858   84284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0419 23:58:03.795183   84284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0419 23:58:03.810534   84284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
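
Note: the 496-byte conflist written here is not printed in the log. A rough sketch of a bridge CNI conflist consistent with the pod CIDR selected above (illustrative only, not the literal contents of /etc/cni/net.d/1-k8s.conflist):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
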
	I0419 23:58:03.830216   84284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 23:58:03.830313   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:03.830367   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-903502 minikube.k8s.io/updated_at=2024_04_19T23_58_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=addons-903502 minikube.k8s.io/primary=true
	I0419 23:58:03.857965   84284 ops.go:34] apiserver oom_adj: -16
	I0419 23:58:03.933556   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:04.434036   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:04.934206   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:05.434382   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:05.933664   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:06.434564   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:06.933679   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:07.433774   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:07.933843   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:08.434389   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:08.933787   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:09.434361   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:09.933585   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:10.433742   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:10.933513   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:11.433944   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:11.934271   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:12.433788   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:12.933658   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:13.433940   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:13.933567   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:14.434608   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:14.934412   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:15.434412   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:15.934310   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:16.434606   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:16.934279   84284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 23:58:17.053917   84284 kubeadm.go:1107] duration metric: took 13.223679997s to wait for elevateKubeSystemPrivileges
	W0419 23:58:17.053999   84284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 23:58:17.054013   84284 kubeadm.go:393] duration metric: took 23.070981538s to StartCluster
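
Note: the block of repeated "kubectl get sa default" calls above is a readiness poll; minikube retries roughly every 500ms until the default service account exists, which is what the elevateKubeSystemPrivileges duration metric measures. An illustrative shell equivalent using the same binary and kubeconfig paths:

    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
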
	I0419 23:58:17.054039   84284 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:58:17.054190   84284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0419 23:58:17.054590   84284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:58:17.054822   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0419 23:58:17.054890   84284 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 23:58:17.056958   84284 out.go:177] * Verifying Kubernetes components...
	I0419 23:58:17.055021   84284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0419 23:58:17.055107   84284 config.go:182] Loaded profile config "addons-903502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 23:58:17.058315   84284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 23:58:17.058328   84284 addons.go:69] Setting cloud-spanner=true in profile "addons-903502"
	I0419 23:58:17.058344   84284 addons.go:69] Setting yakd=true in profile "addons-903502"
	I0419 23:58:17.058350   84284 addons.go:69] Setting gcp-auth=true in profile "addons-903502"
	I0419 23:58:17.058380   84284 addons.go:69] Setting default-storageclass=true in profile "addons-903502"
	I0419 23:58:17.058399   84284 addons.go:69] Setting ingress-dns=true in profile "addons-903502"
	I0419 23:58:17.058404   84284 addons.go:234] Setting addon yakd=true in "addons-903502"
	I0419 23:58:17.058405   84284 addons.go:69] Setting metrics-server=true in profile "addons-903502"
	I0419 23:58:17.058415   84284 addons.go:69] Setting storage-provisioner=true in profile "addons-903502"
	I0419 23:58:17.058421   84284 mustload.go:65] Loading cluster: addons-903502
	I0419 23:58:17.058400   84284 addons.go:69] Setting inspektor-gadget=true in profile "addons-903502"
	I0419 23:58:17.058430   84284 addons.go:234] Setting addon metrics-server=true in "addons-903502"
	I0419 23:58:17.058433   84284 addons.go:234] Setting addon storage-provisioner=true in "addons-903502"
	I0419 23:58:17.058437   84284 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-903502"
	I0419 23:58:17.058448   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058454   84284 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-903502"
	I0419 23:58:17.058454   84284 addons.go:69] Setting ingress=true in profile "addons-903502"
	I0419 23:58:17.058464   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058474   84284 addons.go:234] Setting addon ingress=true in "addons-903502"
	I0419 23:58:17.058492   84284 addons.go:69] Setting registry=true in profile "addons-903502"
	I0419 23:58:17.058508   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058511   84284 addons.go:234] Setting addon registry=true in "addons-903502"
	I0419 23:58:17.058531   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058602   84284 config.go:182] Loaded profile config "addons-903502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 23:58:17.058386   84284 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-903502"
	I0419 23:58:17.058915   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.058924   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.058928   84284 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-903502"
	I0419 23:58:17.058344   84284 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-903502"
	I0419 23:58:17.058941   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058945   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.058464   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.058963   84284 addons.go:69] Setting volumesnapshots=true in profile "addons-903502"
	I0419 23:58:17.058969   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058972   84284 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-903502"
	I0419 23:58:17.058979   84284 addons.go:234] Setting addon volumesnapshots=true in "addons-903502"
	I0419 23:58:17.058988   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058930   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059008   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058947   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059011   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059024   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059037   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058404   84284 addons.go:234] Setting addon cloud-spanner=true in "addons-903502"
	I0419 23:58:17.058449   84284 addons.go:234] Setting addon inspektor-gadget=true in "addons-903502"
	I0419 23:58:17.058335   84284 addons.go:69] Setting helm-tiller=true in profile "addons-903502"
	I0419 23:58:17.059055   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059072   84284 addons.go:234] Setting addon helm-tiller=true in "addons-903502"
	I0419 23:58:17.058425   84284 addons.go:234] Setting addon ingress-dns=true in "addons-903502"
	I0419 23:58:17.058953   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059098   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059264   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059283   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.058421   84284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-903502"
	I0419 23:58:17.059379   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059405   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059429   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059455   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059468   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059495   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059594   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059615   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059628   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059678   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059711   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059779   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.059863   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059893   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.059907   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.059928   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.079704   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I0419 23:58:17.080258   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.080928   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.080968   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.081587   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.081665   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0419 23:58:17.081763   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.081807   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.081802   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.081850   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.082119   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.082310   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.082356   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.082933   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.082966   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.083310   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.083863   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.083893   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.094580   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0419 23:58:17.094616   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I0419 23:58:17.094886   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0419 23:58:17.095159   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.095276   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.095642   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.095663   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.095810   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.095822   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.096026   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.096565   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.096605   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.096864   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.097488   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.097527   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.101688   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.105916   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.105938   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.106366   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.106957   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.106999   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.111463   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35193
	I0419 23:58:17.111673   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0419 23:58:17.112000   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.113368   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0419 23:58:17.113882   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.114445   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.114466   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.114888   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.115495   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.115532   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.115977   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I0419 23:58:17.116441   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.117003   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.117021   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.117426   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.117617   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.119625   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.120386   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.120407   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.120886   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.121641   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.121692   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.122030   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0419 23:58:17.122037   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.122054   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.122472   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.122472   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.122735   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.122932   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.123069   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.123087   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.123551   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.123840   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.123924   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.124015   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.125851   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0419 23:58:17.126590   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.126781   84284 addons.go:234] Setting addon default-storageclass=true in "addons-903502"
	I0419 23:58:17.126831   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.127182   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.127213   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.127299   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.127315   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.127657   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.127832   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.128114   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0419 23:58:17.128777   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.129296   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.129326   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.129487   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.129669   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.129721   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.132295   84284 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0419 23:58:17.130321   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.130385   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0419 23:58:17.131123   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0419 23:58:17.131304   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0419 23:58:17.132192   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I0419 23:58:17.133996   84284 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0419 23:58:17.134380   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.135362   84284 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0419 23:58:17.135676   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0419 23:58:17.137334   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.137349   84284 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0419 23:58:17.137364   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0419 23:58:17.137381   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.136133   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.137407   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.136341   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.136348   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.136384   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.138000   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.138022   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.138031   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.138238   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.138751   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.138769   84284 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-903502"
	I0419 23:58:17.138821   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:17.139064   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.139082   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.139210   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.139247   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.139506   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.139555   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.139920   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.140001   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.141850   84284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0419 23:58:17.140683   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.140838   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.142270   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.143139   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.143178   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.144808   84284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0419 23:58:17.143376   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.142978   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.143708   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0419 23:58:17.145103   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.147317   84284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0419 23:58:17.148778   84284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 23:58:17.147237   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.147414   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.146992   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0419 23:58:17.147432   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.147650   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0419 23:58:17.148537   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I0419 23:58:17.148926   84284 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0419 23:58:17.150075   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0419 23:58:17.150098   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.150236   84284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 23:58:17.150245   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 23:58:17.150260   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.151122   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.151466   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.151554   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.151625   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.151661   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.151703   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.151723   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.151976   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.152000   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.151999   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.152322   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.152379   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.152614   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.152637   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.152651   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.152753   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.153502   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.153514   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I0419 23:58:17.153546   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.153611   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.153627   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.153661   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.153694   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.153835   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.153987   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.154117   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.154184   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.154206   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.154264   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.154278   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.154345   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.154359   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.154662   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.154849   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.155229   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.155250   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.155554   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.155729   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.155907   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.158270   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.158285   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.158302   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.160222   84284 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0419 23:58:17.158740   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.158824   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.158947   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.159026   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.161558   84284 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0419 23:58:17.161568   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0419 23:58:17.161585   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.161625   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.161643   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.162344   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.162408   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.162506   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.162599   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.162780   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.163041   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.166151   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35745
	I0419 23:58:17.166774   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.167347   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.167364   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.168286   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.168327   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.168908   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.168946   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.169368   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.169401   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.169574   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.169831   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.170009   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.170150   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.176972   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0419 23:58:17.177277   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
	I0419 23:58:17.177685   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.177786   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.178171   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.178191   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.178329   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.178347   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.178556   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.178739   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.178928   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.179718   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:17.179751   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:17.180646   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.180712   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36007
	I0419 23:58:17.182462   84284 out.go:177]   - Using image docker.io/registry:2.8.3
	I0419 23:58:17.181493   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.182268   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I0419 23:58:17.183031   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0419 23:58:17.185398   84284 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0419 23:58:17.184401   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.184428   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.184477   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.185512   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
	I0419 23:58:17.185920   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0419 23:58:17.186747   84284 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0419 23:58:17.186765   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0419 23:58:17.186792   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.186840   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.188206   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.188224   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.188302   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.188430   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.188442   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.188811   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.188843   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.188811   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.189036   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.189103   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.189688   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.190240   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.190468   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.190490   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.190875   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.191028   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.191048   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.191154   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.191222   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.191395   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.191565   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.193379   84284 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0419 23:58:17.192458   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
	I0419 23:58:17.192684   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I0419 23:58:17.193203   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.193862   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.194615   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.193907   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.193919   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.194685   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.194706   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.194416   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.194829   84284 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0419 23:58:17.194841   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0419 23:58:17.194857   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.196296   84284 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0419 23:58:17.194954   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.195221   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.196028   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.198027   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.198073   84284 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0419 23:58:17.198209   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.198523   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.198703   84284 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0419 23:58:17.198711   84284 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0419 23:58:17.199327   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.199961   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.200006   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0419 23:58:17.200031   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.200081   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0419 23:58:17.201664   84284 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0419 23:58:17.201691   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0419 23:58:17.201710   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.203274   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0419 23:58:17.203298   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0419 23:58:17.203316   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.200405   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.200439   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.200461   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.201067   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.203411   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.200134   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.203452   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.203459   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.202840   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.203490   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.203141   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0419 23:58:17.203509   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.204437   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.205175   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0419 23:58:17.204445   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.206280   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.204475   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.204881   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.205291   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.205337   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.205363   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.206458   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.207478   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.207488   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0419 23:58:17.207512   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.207611   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.207886   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.208692   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0419 23:58:17.208813   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.208492   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36129
	I0419 23:58:17.208865   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.207901   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.209021   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.210001   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0419 23:58:17.209078   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.209646   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.210192   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.210879   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:17.212462   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0419 23:58:17.211566   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.211607   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.211625   84284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 23:58:17.211888   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:17.213723   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:17.213787   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0419 23:58:17.215077   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0419 23:58:17.215094   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0419 23:58:17.215108   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.213922   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 23:58:17.215173   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.216606   84284 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0419 23:58:17.214103   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.214113   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:17.217881   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0419 23:58:17.217897   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0419 23:58:17.217912   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.218103   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.218235   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.218607   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.218628   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.218708   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:17.219406   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.219656   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.219953   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.219958   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.220003   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.220107   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.220279   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.220293   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.220507   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.220686   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.220895   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:17.220982   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:17.222868   84284 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0419 23:58:17.221845   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.222898   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.222911   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.222383   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.223101   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.224424   84284 out.go:177]   - Using image docker.io/busybox:stable
	I0419 23:58:17.225725   84284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0419 23:58:17.225739   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0419 23:58:17.225751   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:17.224594   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.226235   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	W0419 23:58:17.226948   84284 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54470->192.168.39.36:22: read: connection reset by peer
	I0419 23:58:17.226974   84284 retry.go:31] will retry after 297.279332ms: ssh: handshake failed: read tcp 192.168.39.1:54470->192.168.39.36:22: read: connection reset by peer
	I0419 23:58:17.228148   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.228413   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:17.228430   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:17.228578   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:17.228722   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:17.228824   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:17.228922   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	W0419 23:58:17.234477   84284 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54478->192.168.39.36:22: read: connection reset by peer
	I0419 23:58:17.234495   84284 retry.go:31] will retry after 179.667439ms: ssh: handshake failed: read tcp 192.168.39.1:54478->192.168.39.36:22: read: connection reset by peer
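
	The two handshake failures above are not fatal: minikube's ssh/retry helpers log the error and try again after a short delay, which is why the addon deploy continues. A minimal sketch of that retry-with-delay pattern, using hypothetical names rather than minikube's actual retry.go API, could look like this in Go:

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// retryWithDelay runs fn up to attempts times, sleeping delay between
	// failures, and returns the last error if every attempt fails.
	// (Hypothetical helper for illustration; minikube's real retry.go differs.)
	func retryWithDelay(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			log.Printf("attempt %d failed: %v; will retry after %s", i+1, err, delay)
			time.Sleep(delay)
		}
		return fmt.Errorf("all %d attempts failed: %w", attempts, err)
	}

	func main() {
		err := retryWithDelay(3, 200*time.Millisecond, func() error {
			// Placeholder for the real work, e.g. dialing SSH to the guest VM.
			return fmt.Errorf("ssh: handshake failed: connection reset by peer")
		})
		if err != nil {
			log.Println(err)
		}
	}
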
	I0419 23:58:17.656047   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0419 23:58:17.656082   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0419 23:58:17.660452   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0419 23:58:17.698158   84284 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0419 23:58:17.698188   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0419 23:58:17.706586   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 23:58:17.752530   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0419 23:58:17.765867   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 23:58:17.769214   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0419 23:58:17.781458   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0419 23:58:17.781479   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0419 23:58:17.809621   84284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0419 23:58:17.809650   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0419 23:58:17.850530   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0419 23:58:17.869545   84284 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0419 23:58:17.869584   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0419 23:58:17.904122   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0419 23:58:17.904145   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0419 23:58:17.908626   84284 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0419 23:58:17.908653   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0419 23:58:17.927918   84284 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0419 23:58:17.927942   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0419 23:58:18.039653   84284 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0419 23:58:18.039687   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0419 23:58:18.059068   84284 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0419 23:58:18.059104   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0419 23:58:18.081605   84284 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0419 23:58:18.081641   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0419 23:58:18.108763   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0419 23:58:18.132297   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0419 23:58:18.132332   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0419 23:58:18.150127   84284 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0419 23:58:18.150152   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0419 23:58:18.204200   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0419 23:58:18.227605   84284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0419 23:58:18.227640   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0419 23:58:18.238396   84284 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0419 23:58:18.238428   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0419 23:58:18.264758   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0419 23:58:18.270051   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0419 23:58:18.270072   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0419 23:58:18.341132   84284 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0419 23:58:18.341159   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0419 23:58:18.346362   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0419 23:58:18.346402   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0419 23:58:18.387851   84284 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.33299283s)
	I0419 23:58:18.387933   84284 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.329587298s)
	I0419 23:58:18.388001   84284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 23:58:18.388022   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
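
	The sed pipeline in the command above patches the CoreDNS ConfigMap so that DNS queries for host.minikube.internal resolve to the host bridge address (192.168.39.1) and query logging is enabled. Reconstructed from the two sed expressions (not copied from the running cluster), the affected part of the Corefile should end up roughly as:

	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
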
	I0419 23:58:18.445983   84284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0419 23:58:18.446010   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0419 23:58:18.448078   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0419 23:58:18.448094   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0419 23:58:18.523488   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0419 23:58:18.523516   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0419 23:58:18.603546   84284 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0419 23:58:18.603573   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0419 23:58:18.617713   84284 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0419 23:58:18.617735   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0419 23:58:18.809927   84284 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0419 23:58:18.809952   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0419 23:58:18.951697   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0419 23:58:18.961685   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0419 23:58:18.966655   84284 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0419 23:58:18.966680   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0419 23:58:19.042624   84284 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0419 23:58:19.042657   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0419 23:58:19.268273   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0419 23:58:19.268301   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0419 23:58:19.432233   84284 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0419 23:58:19.432261   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0419 23:58:19.443919   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0419 23:58:19.735435   84284 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0419 23:58:19.735461   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0419 23:58:19.784294   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0419 23:58:19.784328   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0419 23:58:20.023070   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0419 23:58:20.079344   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0419 23:58:20.079376   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0419 23:58:20.506716   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0419 23:58:20.506742   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0419 23:58:21.005879   84284 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0419 23:58:21.005907   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0419 23:58:21.646602   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0419 23:58:21.971302   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.264674032s)
	I0419 23:58:21.971375   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:21.971392   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:21.971773   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:21.971835   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:21.971855   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:21.971868   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:21.971867   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:21.972065   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.311582701s)
	I0419 23:58:21.972101   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:21.972206   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:21.972125   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:21.972181   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:21.972261   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:21.972466   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:21.972476   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:21.972527   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:21.972542   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:21.972566   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:21.972774   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:21.972849   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:21.972861   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:22.031290   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:22.031312   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:22.031603   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:22.031650   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:24.235486   84284 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0419 23:58:24.235534   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:24.238287   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:24.238746   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:24.238778   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:24.238959   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:24.239174   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:24.239335   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:24.239505   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:25.063304   84284 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0419 23:58:25.379287   84284 addons.go:234] Setting addon gcp-auth=true in "addons-903502"
	I0419 23:58:25.379351   84284 host.go:66] Checking if "addons-903502" exists ...
	I0419 23:58:25.379704   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:25.379739   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:25.394724   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42965
	I0419 23:58:25.395123   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:25.395691   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:25.395722   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:25.396082   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:25.396657   84284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 23:58:25.396701   84284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 23:58:25.444717   84284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0419 23:58:25.445159   84284 main.go:141] libmachine: () Calling .GetVersion
	I0419 23:58:25.445722   84284 main.go:141] libmachine: Using API Version  1
	I0419 23:58:25.445745   84284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 23:58:25.446272   84284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 23:58:25.446540   84284 main.go:141] libmachine: (addons-903502) Calling .GetState
	I0419 23:58:25.448393   84284 main.go:141] libmachine: (addons-903502) Calling .DriverName
	I0419 23:58:25.448660   84284 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0419 23:58:25.448692   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHHostname
	I0419 23:58:25.451410   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:25.451830   84284 main.go:141] libmachine: (addons-903502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:a2:50", ip: ""} in network mk-addons-903502: {Iface:virbr1 ExpiryTime:2024-04-20 00:57:38 +0000 UTC Type:0 Mac:52:54:00:6a:a2:50 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-903502 Clientid:01:52:54:00:6a:a2:50}
	I0419 23:58:25.451859   84284 main.go:141] libmachine: (addons-903502) DBG | domain addons-903502 has defined IP address 192.168.39.36 and MAC address 52:54:00:6a:a2:50 in network mk-addons-903502
	I0419 23:58:25.451966   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHPort
	I0419 23:58:25.452148   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHKeyPath
	I0419 23:58:25.452312   84284 main.go:141] libmachine: (addons-903502) Calling .GetSSHUsername
	I0419 23:58:25.452441   84284 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/addons-903502/id_rsa Username:docker}
	I0419 23:58:26.758236   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.992328895s)
	I0419 23:58:26.758303   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758317   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758354   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.989107327s)
	I0419 23:58:26.758396   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758412   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758423   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.005854567s)
	I0419 23:58:26.758448   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.649652744s)
	I0419 23:58:26.758424   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.907826902s)
	I0419 23:58:26.758472   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758453   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758483   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758486   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758511   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.554281271s)
	I0419 23:58:26.758473   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758531   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758533   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758541   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758573   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.493771789s)
	I0419 23:58:26.758590   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758601   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.758602   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758614   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.758625   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.758632   84284 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.370613804s)
	I0419 23:58:26.758636   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758645   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758652   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.758653   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.758660   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.758669   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758682   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758729   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.758752   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.758760   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.758768   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758775   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.758803   84284 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.370760805s)
	I0419 23:58:26.758840   84284 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
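
	The sed pipeline that just completed rewrites the coredns ConfigMap in place; unescaped, the stanza it inserts ahead of the "forward . /etc/resolv.conf" directive would read roughly as follows (reconstructed from the command above, not captured from the cluster), plus a "log" directive inserted before "errors":

		hosts {
		   192.168.39.1 host.minikube.internal
		   fallthrough
		}

	This is what lets host.minikube.internal resolve to the host side of the 192.168.39.0/24 network from inside the node.
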
	I0419 23:58:26.758911   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.807182396s)
	I0419 23:58:26.758936   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.758949   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759051   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.797335839s)
	I0419 23:58:26.759069   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759078   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759203   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.315247755s)
	W0419 23:58:26.759234   84284 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0419 23:58:26.759257   84284 retry.go:31] will retry after 186.491444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
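
	The failure above is an ordering problem rather than a missing manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kind yet, hence "ensure CRDs are installed first". minikube simply retries (below it re-applies with --force). A manual sequence that avoids the race, sketched here under the assumption that the same addon manifests and kubeconfig from this log are used, would be:

		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

	The re-apply started at 23:58:26.946839 below completes without a further retry once the CRDs are established.
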
	I0419 23:58:26.759340   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.736238377s)
	I0419 23:58:26.759358   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759368   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759427   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759443   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759464   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.759471   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759478   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759485   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759531   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.759539   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759547   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759553   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759590   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759609   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.759616   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759624   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.759630   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.759665   84284 node_ready.go:35] waiting up to 6m0s for node "addons-903502" to be "Ready" ...
	I0419 23:58:26.759708   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759732   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.759739   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759750   84284 addons.go:470] Verifying addon ingress=true in "addons-903502"
	I0419 23:58:26.765061   84284 out.go:177] * Verifying ingress addon...
	I0419 23:58:26.759849   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759874   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766431   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.759889   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.759907   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766490   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766496   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.760098   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.760123   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766536   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.760140   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.760156   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766601   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.760350   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766619   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.766629   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.766630   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.761362   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.761390   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766703   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.761405   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.761420   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766791   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766802   84284 addons.go:470] Verifying addon metrics-server=true in "addons-903502"
	I0419 23:58:26.761922   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766861   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766870   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.761945   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.766879   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.764956   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.764972   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766930   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766939   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.766946   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.766962   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.766502   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.766975   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.766986   84284 addons.go:470] Verifying addon registry=true in "addons-903502"
	I0419 23:58:26.760375   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.769072   84284 out.go:177] * Verifying registry addon...
	I0419 23:58:26.767060   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.767075   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.767251   84284 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0419 23:58:26.767268   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.767285   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.767456   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.767489   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.770251   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.770283   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.770293   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.771499   84284 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-903502 service yakd-dashboard -n yakd-dashboard
	
	I0419 23:58:26.770940   84284 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0419 23:58:26.784581   84284 node_ready.go:49] node "addons-903502" has status "Ready":"True"
	I0419 23:58:26.784599   84284 node_ready.go:38] duration metric: took 24.91739ms for node "addons-903502" to be "Ready" ...
	I0419 23:58:26.784607   84284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
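
	The waits that start here (up to 6m0s for node readiness and for the system-critical pods) go through minikube's own client; an approximately equivalent check from the host, shown for illustration only, would be:

		kubectl --context addons-903502 wait --for=condition=Ready node/addons-903502 --timeout=6m
		kubectl --context addons-903502 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m

	In this run the node is already Ready, so the node wait returns after roughly 25ms.
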
	I0419 23:58:26.787562   84284 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0419 23:58:26.787584   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:26.799952   84284 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0419 23:58:26.799976   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:26.823324   84284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2dd9g" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.840790   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:26.840809   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:26.841113   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:26.841130   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:26.841169   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:26.858056   84284 pod_ready.go:92] pod "coredns-7db6d8ff4d-2dd9g" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:26.858090   84284 pod_ready.go:81] duration metric: took 34.73522ms for pod "coredns-7db6d8ff4d-2dd9g" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.858104   84284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tjjdl" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.891970   84284 pod_ready.go:92] pod "coredns-7db6d8ff4d-tjjdl" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:26.891999   84284 pod_ready.go:81] duration metric: took 33.886562ms for pod "coredns-7db6d8ff4d-tjjdl" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.892012   84284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.946839   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0419 23:58:26.989913   84284 pod_ready.go:92] pod "etcd-addons-903502" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:26.989937   84284 pod_ready.go:81] duration metric: took 97.916362ms for pod "etcd-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:26.989949   84284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.022682   84284 pod_ready.go:92] pod "kube-apiserver-addons-903502" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:27.022705   84284 pod_ready.go:81] duration metric: took 32.748664ms for pod "kube-apiserver-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.022717   84284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.166905   84284 pod_ready.go:92] pod "kube-controller-manager-addons-903502" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:27.166930   84284 pod_ready.go:81] duration metric: took 144.204537ms for pod "kube-controller-manager-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.166945   84284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7nxm" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.270455   84284 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-903502" context rescaled to 1 replicas
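
	Rescaling coredns to a single replica is minikube's usual post-start step (kubeadm starts two, which is why two coredns-7db6d8ff4d pods were checked above); a manual equivalent, offered only as a sketch, would be:

		kubectl --context addons-903502 -n kube-system scale deployment coredns --replicas=1
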
	I0419 23:58:27.274941   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:27.278436   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:27.563847   84284 pod_ready.go:92] pod "kube-proxy-v7nxm" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:27.563871   84284 pod_ready.go:81] duration metric: took 396.91828ms for pod "kube-proxy-v7nxm" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.563883   84284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.786804   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:27.790094   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:27.964301   84284 pod_ready.go:92] pod "kube-scheduler-addons-903502" in "kube-system" namespace has status "Ready":"True"
	I0419 23:58:27.964322   84284 pod_ready.go:81] duration metric: took 400.430953ms for pod "kube-scheduler-addons-903502" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:27.964332   84284 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace to be "Ready" ...
	I0419 23:58:28.319931   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:28.323768   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:28.439685   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.793017842s)
	I0419 23:58:28.439748   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:28.439763   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:28.439771   84284 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.991083742s)
	I0419 23:58:28.441011   84284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0419 23:58:28.440070   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:28.440096   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:28.443182   84284 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0419 23:58:28.442083   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:28.444371   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:28.444382   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:28.444437   84284 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0419 23:58:28.444461   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0419 23:58:28.444655   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:28.444718   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:28.444733   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:28.444744   84284 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-903502"
	I0419 23:58:28.446013   84284 out.go:177] * Verifying csi-hostpath-driver addon...
	I0419 23:58:28.447829   84284 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0419 23:58:28.483817   84284 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0419 23:58:28.483840   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0419 23:58:28.534693   84284 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0419 23:58:28.534719   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
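
	The csi-hostpath-driver verification polls this label selector until all three pods leave Pending; a hand-run equivalent, illustrative only, is:

		kubectl --context addons-903502 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
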
	I0419 23:58:28.639829   84284 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0419 23:58:28.639864   84284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0419 23:58:28.777678   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:28.777922   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:28.802522   84284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0419 23:58:28.954009   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:29.275825   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:29.277689   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:29.416743   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.469836575s)
	I0419 23:58:29.416805   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:29.416818   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:29.417160   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:29.417200   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:29.417218   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:29.417239   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:29.417251   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:29.417572   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:29.417591   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:29.417621   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:29.478690   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:29.777912   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:29.780177   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:29.957136   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:29.972034   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:30.300479   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:30.307465   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:30.372590   84284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.570026582s)
	I0419 23:58:30.372710   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:30.372725   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:30.373161   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:30.373180   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:30.373190   84284 main.go:141] libmachine: Making call to close driver server
	I0419 23:58:30.373198   84284 main.go:141] libmachine: (addons-903502) Calling .Close
	I0419 23:58:30.373510   84284 main.go:141] libmachine: (addons-903502) DBG | Closing plugin on server side
	I0419 23:58:30.373566   84284 main.go:141] libmachine: Successfully made call to close driver server
	I0419 23:58:30.373586   84284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 23:58:30.375624   84284 addons.go:470] Verifying addon gcp-auth=true in "addons-903502"
	I0419 23:58:30.377247   84284 out.go:177] * Verifying gcp-auth addon...
	I0419 23:58:30.379310   84284 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0419 23:58:30.418268   84284 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0419 23:58:30.418290   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
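
	The gcp-auth verification works the same way against the gcp-auth namespace; a rough manual equivalent, again just a sketch, is:

		kubectl --context addons-903502 -n gcp-auth wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=gcp-auth
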
	I0419 23:58:30.484765   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:30.776316   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:30.784813   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:30.887478   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:30.953663   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:31.279558   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:31.284373   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:31.384500   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:31.453568   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:31.780329   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:31.781619   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:31.884903   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:31.953709   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:32.273866   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:32.276647   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:32.382677   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:32.456358   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:32.471150   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:32.775752   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:32.778757   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:32.884068   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:32.954193   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:33.274753   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:33.278106   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:33.383189   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:33.453125   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:33.775619   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:33.778295   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:33.893036   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:33.953753   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:34.515835   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:34.515869   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:34.516576   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:34.516812   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:34.520404   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:34.775352   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:34.778236   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:34.883688   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:34.953625   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:35.275880   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:35.278646   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:35.386481   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:35.455046   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:35.777112   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:35.777929   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:35.883013   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:35.954686   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:36.276301   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:36.278196   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:36.382977   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:36.457227   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:36.774696   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:36.777177   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:36.883489   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:36.954055   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:36.984075   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:37.274615   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:37.277555   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:37.384027   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:37.454094   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:37.774875   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:37.777447   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:37.884601   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:37.955753   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:38.276372   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:38.279691   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:38.383932   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:38.454142   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:38.774264   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:38.777161   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:38.883282   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:38.953376   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:39.276321   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:39.278315   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:39.383115   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:39.452758   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:39.469520   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:39.775191   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:39.777110   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:39.884170   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:39.952969   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:40.275453   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:40.277419   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:40.383832   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:40.454015   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:40.775047   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:40.778016   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:40.883101   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:40.953668   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:41.275585   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:41.279084   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:41.382740   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:41.453118   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:41.470362   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:41.775177   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:41.778206   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:41.883727   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:41.954577   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:42.309961   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:42.310763   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:42.383993   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:42.453931   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:42.774540   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:42.777005   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:42.883406   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:42.954230   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:43.276499   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:43.277478   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:43.384975   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:43.458697   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:43.472920   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:43.777439   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:43.791039   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:43.890757   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:43.953947   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:44.275621   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:44.278829   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:44.385026   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:44.455048   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:44.778453   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:44.779321   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:44.883744   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:44.955049   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:45.274956   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:45.277579   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:45.384302   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:45.453630   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:45.775545   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:45.777219   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:45.885356   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:45.953685   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:45.971520   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:46.279345   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:46.284210   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:46.383477   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:46.461044   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:46.775015   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:46.777358   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:46.888038   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:46.954412   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:47.354363   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:47.354493   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:47.384242   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:47.453969   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:47.775470   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:47.778045   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:47.884050   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:47.954632   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:47.971853   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:48.275512   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:48.278626   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:48.383644   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:48.454428   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:48.775383   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:48.777598   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:48.882675   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:48.954160   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:49.277978   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:49.280404   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:49.383432   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:49.454217   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:49.776658   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:49.779235   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:49.886213   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:49.954830   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:50.276031   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:50.279234   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:50.383552   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:50.453825   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:50.470029   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:50.775952   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:50.778111   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:50.883451   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:50.954212   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:51.275449   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:51.278138   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:51.384165   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:51.454364   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:51.775683   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:51.784590   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:51.883716   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:51.957357   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:52.275939   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:52.282987   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:52.383057   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:52.456731   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:52.474466   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:52.775876   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:52.778296   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:52.883555   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:52.953479   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:53.276335   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:53.279210   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:53.384294   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:53.453946   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:53.775662   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:53.778324   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:53.883431   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:53.953225   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:54.274930   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:54.277653   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:54.384158   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:54.454114   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:54.775117   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:54.779678   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:54.882773   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:54.956351   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:54.969396   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:55.274499   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:55.278294   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:55.383286   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:55.454284   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:55.775413   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:55.783091   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:55.883262   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:55.953822   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:56.765609   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:56.775033   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:56.777162   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:56.790892   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:56.804450   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:56.804575   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:56.888803   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:56.953575   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:56.971614   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:57.275037   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:57.289849   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:57.387661   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:57.454386   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:57.775026   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:57.784599   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:57.884377   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:57.954852   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:58.276608   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:58.279357   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:58.383169   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:58.453059   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:58.775756   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:58.778038   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:59.180461   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:59.183745   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:58:59.184428   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:59.275383   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:59.278333   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:59.383899   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:59.454124   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:58:59.775595   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:58:59.778370   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:58:59.884691   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:58:59.954647   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:00.284108   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:00.290041   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:00.386943   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:00.461218   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:00.775071   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:00.778897   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:00.883574   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:00.960357   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:01.275169   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:01.278648   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:01.383575   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:01.454243   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:01.469834   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:01.776827   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:01.778401   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:01.884087   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:01.954302   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:02.275393   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:02.283347   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:02.386785   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:02.454133   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:02.775132   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:02.780745   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:02.888200   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:02.953593   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:03.281156   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:03.284899   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:03.391455   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:03.458999   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:03.492024   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:03.792222   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:03.792284   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:03.883294   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:03.953197   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:04.376044   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:04.387662   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:04.419129   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:04.557004   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:04.785916   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:04.786393   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:04.883095   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:04.955697   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:05.275312   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:05.277773   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:05.382791   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:05.454176   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:05.775113   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:05.777464   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:05.885380   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:05.953214   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:05.969675   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:06.275332   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:06.279041   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:06.385200   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:06.453685   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:06.774945   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:06.777812   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:06.882994   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:06.957224   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:07.569260   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:07.582296   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:07.584056   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:07.584777   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:07.781008   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:07.784545   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:07.883022   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:07.954162   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:07.969899   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:08.276940   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:08.277898   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:08.383203   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:08.454658   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:08.775669   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:08.778393   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:08.893027   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:08.954218   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:09.285197   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:09.286522   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:09.383990   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:09.454434   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:09.776318   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:09.778798   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:09.883183   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:09.955000   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:09.970535   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:10.275880   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:10.288218   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:10.383235   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:10.454241   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:10.774577   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:10.777125   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:10.883383   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:10.953541   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:11.276798   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:11.278750   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:11.382533   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:11.453511   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:11.775327   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:11.777799   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:11.889902   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:11.953931   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:11.970926   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:12.429273   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:12.430003   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:12.430271   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:12.455069   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:12.775636   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:12.779814   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:12.883456   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:12.954690   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:13.275141   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:13.278479   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:13.383399   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:13.470023   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:13.775126   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:13.777257   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 23:59:13.882904   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:13.958582   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:13.972579   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:14.275579   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:14.278094   84284 kapi.go:107] duration metric: took 47.507149601s to wait for kubernetes.io/minikube-addons=registry ...
	I0419 23:59:14.386120   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:14.454534   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:14.775500   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:14.887808   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:14.953750   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:15.274652   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:15.383598   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:15.454017   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:15.777154   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:15.888877   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:15.954120   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:16.277398   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:16.385033   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:16.455082   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:16.475572   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:16.775468   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:16.885244   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:16.956770   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:17.274789   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:17.410048   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:17.455376   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:17.775199   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:17.883508   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:17.958066   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:18.276017   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:18.383507   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:18.455325   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:18.775039   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:18.882986   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:18.953890   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:18.986829   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:19.579639   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:19.580104   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:19.580658   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:19.774591   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:19.886255   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:19.953117   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:20.275507   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:20.383590   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:20.454090   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:20.774805   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:20.883938   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:20.954232   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:21.274525   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:21.384012   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:21.454496   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:21.473675   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:21.774890   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:21.883942   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:21.953590   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:22.275898   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:22.383566   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:22.454222   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:22.775235   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:22.886438   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:22.954738   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:23.274622   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:23.387976   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:23.453988   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:23.776542   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:23.883860   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:23.955174   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:23.971957   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:24.276603   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:24.383510   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:24.455449   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:24.775244   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:24.886926   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:24.971637   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:25.274578   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:25.384170   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:25.454248   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:25.775758   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:25.884447   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:25.953948   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:26.280921   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:26.384059   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:26.454636   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:26.469783   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:26.775669   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:26.884811   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:26.954999   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:27.276314   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:27.385600   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:27.454968   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:27.775213   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:27.895586   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:27.964010   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:28.275257   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:28.395010   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:28.460720   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:28.474516   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:28.775127   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:28.882997   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:28.954778   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:29.274923   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:29.382886   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:29.454243   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:29.778377   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:29.884947   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:29.955834   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:30.275175   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:30.383695   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:30.457942   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:30.776250   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:30.882927   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:30.953908   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:30.970373   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:31.274286   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:31.386100   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:31.457969   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:31.777691   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:31.884327   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:31.953528   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:32.277180   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:32.386774   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:32.454923   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:32.775535   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:32.883221   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:32.954136   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 23:59:32.972704   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:33.275646   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:33.383230   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:33.460835   84284 kapi.go:107] duration metric: took 1m5.013001457s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0419 23:59:33.782907   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:33.884085   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:34.275364   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:34.383752   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:34.873957   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:34.883691   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:34.972846   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:35.274703   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:35.383465   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:35.774279   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:35.883343   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:36.275812   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:36.383568   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:36.774949   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:36.882957   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:37.275098   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:37.383034   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:37.477347   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:37.776010   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:38.228899   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:38.275127   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:38.386516   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:38.775544   84284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 23:59:38.884459   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:39.276956   84284 kapi.go:107] duration metric: took 1m12.509703827s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0419 23:59:39.382180   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:39.883727   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:39.973518   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:40.383207   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:40.885406   84284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 23:59:41.384229   84284 kapi.go:107] duration metric: took 1m11.004915495s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0419 23:59:41.385793   84284 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-903502 cluster.
	I0419 23:59:41.387129   84284 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0419 23:59:41.388380   84284 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0419 23:59:41.389753   84284 out.go:177] * Enabled addons: ingress-dns, default-storageclass, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0419 23:59:41.391126   84284 addons.go:505] duration metric: took 1m24.336108338s for enable addons: enabled=[ingress-dns default-storageclass cloud-spanner storage-provisioner nvidia-device-plugin metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
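(Editor's aside, not part of the captured log: the polling above is minikube waiting, per label selector, for addon pods to leave Pending and report Ready via kapi.go, while pod_ready.go separately tracks the metrics-server pod. As a rough illustration only, and not minikube's actual implementation, the same label-based Ready wait can be sketched with client-go; the `k8s-app=metrics-server` selector, the `kube-system` namespace, and the timeout below are assumptions chosen for the example.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsReady polls pods matching labelSelector in namespace until every
// matching pod reports the Ready condition as True, or the timeout elapses.
// This is an illustrative sketch, not minikube's kapi.go/pod_ready.go code.
func waitForPodsReady(ctx context.Context, cs *kubernetes.Clientset, namespace, labelSelector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				allReady = false
			}
		}
		if allReady {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pods %q in %q", labelSelector, namespace)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Assumes a reachable cluster via the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsReady(context.Background(), cs, "kube-system", "k8s-app=metrics-server", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pods ready")
}
```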
	I0419 23:59:42.472275   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:44.973215   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:47.470219   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:49.471469   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:51.472949   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:53.475400   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:55.971902   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0419 23:59:58.471149   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:00.471854   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:02.972137   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:05.478061   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:07.971796   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:10.472071   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:12.475990   84284 pod_ready.go:102] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"False"
	I0420 00:00:14.475200   84284 pod_ready.go:92] pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace has status "Ready":"True"
	I0420 00:00:14.475226   84284 pod_ready.go:81] duration metric: took 1m46.510887705s for pod "metrics-server-c59844bb4-msq6m" in "kube-system" namespace to be "Ready" ...
	I0420 00:00:14.475238   84284 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gxtqp" in "kube-system" namespace to be "Ready" ...
	I0420 00:00:14.480368   84284 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-gxtqp" in "kube-system" namespace has status "Ready":"True"
	I0420 00:00:14.480388   84284 pod_ready.go:81] duration metric: took 5.143341ms for pod "nvidia-device-plugin-daemonset-gxtqp" in "kube-system" namespace to be "Ready" ...
	I0420 00:00:14.480406   84284 pod_ready.go:38] duration metric: took 1m47.695787345s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
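	(The pod_ready poll above watches the Ready condition of each system pod; a rough command-line equivalent, assuming kubectl points at this cluster and using the pod name from the log:

	    # print the Ready condition the poller above was waiting on
	    kubectl -n kube-system get pod metrics-server-c59844bb4-msq6m \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	)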
	I0420 00:00:14.480426   84284 api_server.go:52] waiting for apiserver process to appear ...
	I0420 00:00:14.480471   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:00:14.480540   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:00:14.533066   84284 cri.go:89] found id: "330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:14.533098   84284 cri.go:89] found id: ""
	I0420 00:00:14.533109   84284 logs.go:276] 1 containers: [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46]
	I0420 00:00:14.533168   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.538162   84284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:00:14.538237   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:00:14.598791   84284 cri.go:89] found id: "274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:14.598817   84284 cri.go:89] found id: ""
	I0420 00:00:14.598825   84284 logs.go:276] 1 containers: [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd]
	I0420 00:00:14.598873   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.604201   84284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:00:14.604279   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:00:14.655424   84284 cri.go:89] found id: "7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:14.655447   84284 cri.go:89] found id: ""
	I0420 00:00:14.655455   84284 logs.go:276] 1 containers: [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8]
	I0420 00:00:14.655502   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.660870   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:00:14.660936   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:00:14.708699   84284 cri.go:89] found id: "f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:14.708733   84284 cri.go:89] found id: ""
	I0420 00:00:14.708743   84284 logs.go:276] 1 containers: [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf]
	I0420 00:00:14.708808   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.713454   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:00:14.713522   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:00:14.759427   84284 cri.go:89] found id: "83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:14.759454   84284 cri.go:89] found id: ""
	I0420 00:00:14.759464   84284 logs.go:276] 1 containers: [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751]
	I0420 00:00:14.759523   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.764711   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:00:14.764781   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:00:14.828404   84284 cri.go:89] found id: "4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:14.828433   84284 cri.go:89] found id: ""
	I0420 00:00:14.828444   84284 logs.go:276] 1 containers: [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a]
	I0420 00:00:14.828500   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:14.834365   84284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:00:14.834419   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:00:14.886752   84284 cri.go:89] found id: ""
	I0420 00:00:14.886783   84284 logs.go:276] 0 containers: []
	W0420 00:00:14.886792   84284 logs.go:278] No container was found matching "kindnet"
	I0420 00:00:14.886802   84284 logs.go:123] Gathering logs for dmesg ...
	I0420 00:00:14.886819   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:00:14.904150   84284 logs.go:123] Gathering logs for coredns [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8] ...
	I0420 00:00:14.904188   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:14.947152   84284 logs.go:123] Gathering logs for kube-scheduler [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf] ...
	I0420 00:00:14.947187   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:15.002932   84284 logs.go:123] Gathering logs for kube-proxy [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751] ...
	I0420 00:00:15.002967   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:15.050711   84284 logs.go:123] Gathering logs for container status ...
	I0420 00:00:15.050744   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:00:15.132837   84284 logs.go:123] Gathering logs for kubelet ...
	I0420 00:00:15.132875   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 00:00:15.220983   84284 logs.go:123] Gathering logs for kube-apiserver [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46] ...
	I0420 00:00:15.221024   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:15.276789   84284 logs.go:123] Gathering logs for etcd [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd] ...
	I0420 00:00:15.276833   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:15.351358   84284 logs.go:123] Gathering logs for kube-controller-manager [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a] ...
	I0420 00:00:15.351399   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:15.420349   84284 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:00:15.420391   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:00:16.198732   84284 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:00:16.198780   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
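	(Each "Gathering logs for ..." step above shells into the node and runs a bounded tail; the same commands can be run by hand on the node, with the container ID taken from this log:

	    # component container logs via the CRI, plus the kubelet and CRI-O journals
	    sudo crictl logs --tail 400 330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	)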
	I0420 00:00:18.846813   84284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:00:18.871529   84284 api_server.go:72] duration metric: took 2m1.816594549s to wait for apiserver process to appear ...
	I0420 00:00:18.871560   84284 api_server.go:88] waiting for apiserver healthz status ...
	I0420 00:00:18.871600   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:00:18.871660   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:00:18.918636   84284 cri.go:89] found id: "330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:18.918672   84284 cri.go:89] found id: ""
	I0420 00:00:18.918684   84284 logs.go:276] 1 containers: [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46]
	I0420 00:00:18.918756   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:18.923759   84284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:00:18.923849   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:00:18.980291   84284 cri.go:89] found id: "274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:18.980330   84284 cri.go:89] found id: ""
	I0420 00:00:18.980343   84284 logs.go:276] 1 containers: [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd]
	I0420 00:00:18.980414   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:18.987749   84284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:00:18.987831   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:00:19.032315   84284 cri.go:89] found id: "7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:19.032351   84284 cri.go:89] found id: ""
	I0420 00:00:19.032364   84284 logs.go:276] 1 containers: [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8]
	I0420 00:00:19.032436   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:19.037059   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:00:19.037126   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:00:19.081031   84284 cri.go:89] found id: "f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:19.081059   84284 cri.go:89] found id: ""
	I0420 00:00:19.081068   84284 logs.go:276] 1 containers: [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf]
	I0420 00:00:19.081123   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:19.086941   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:00:19.087032   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:00:19.133890   84284 cri.go:89] found id: "83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:19.133922   84284 cri.go:89] found id: ""
	I0420 00:00:19.133933   84284 logs.go:276] 1 containers: [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751]
	I0420 00:00:19.133995   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:19.138910   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:00:19.138989   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:00:19.197648   84284 cri.go:89] found id: "4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:19.197674   84284 cri.go:89] found id: ""
	I0420 00:00:19.197687   84284 logs.go:276] 1 containers: [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a]
	I0420 00:00:19.197750   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:19.203824   84284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:00:19.203895   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:00:19.248219   84284 cri.go:89] found id: ""
	I0420 00:00:19.248248   84284 logs.go:276] 0 containers: []
	W0420 00:00:19.248256   84284 logs.go:278] No container was found matching "kindnet"
	I0420 00:00:19.248265   84284 logs.go:123] Gathering logs for kube-apiserver [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46] ...
	I0420 00:00:19.248278   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:19.299308   84284 logs.go:123] Gathering logs for etcd [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd] ...
	I0420 00:00:19.299343   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:19.375220   84284 logs.go:123] Gathering logs for coredns [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8] ...
	I0420 00:00:19.375253   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:19.426768   84284 logs.go:123] Gathering logs for kube-proxy [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751] ...
	I0420 00:00:19.426798   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:19.483729   84284 logs.go:123] Gathering logs for kube-controller-manager [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a] ...
	I0420 00:00:19.483765   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:19.564014   84284 logs.go:123] Gathering logs for container status ...
	I0420 00:00:19.564056   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:00:19.631780   84284 logs.go:123] Gathering logs for kubelet ...
	I0420 00:00:19.631825   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 00:00:19.712091   84284 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:00:19.712129   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:00:19.850379   84284 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:00:19.850416   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:00:20.705535   84284 logs.go:123] Gathering logs for dmesg ...
	I0420 00:00:20.705582   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:00:20.722071   84284 logs.go:123] Gathering logs for kube-scheduler [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf] ...
	I0420 00:00:20.722114   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:23.279184   84284 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0420 00:00:23.284925   84284 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0420 00:00:23.286183   84284 api_server.go:141] control plane version: v1.30.0
	I0420 00:00:23.286206   84284 api_server.go:131] duration metric: took 4.414639406s to wait for apiserver health ...
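	(The healthz wait above probes the apiserver endpoint directly; a minimal manual check against the same URL, skipping certificate verification for illustration:

	    # "ok" with HTTP 200 matches the api_server.go:279 response logged above
	    curl -k https://192.168.39.36:8443/healthz
	)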
	I0420 00:00:23.286214   84284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 00:00:23.286239   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:00:23.286287   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:00:23.346161   84284 cri.go:89] found id: "330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:23.346191   84284 cri.go:89] found id: ""
	I0420 00:00:23.346202   84284 logs.go:276] 1 containers: [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46]
	I0420 00:00:23.346261   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.355643   84284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:00:23.355709   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:00:23.407697   84284 cri.go:89] found id: "274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:23.407727   84284 cri.go:89] found id: ""
	I0420 00:00:23.407738   84284 logs.go:276] 1 containers: [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd]
	I0420 00:00:23.407815   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.412726   84284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:00:23.412795   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:00:23.454636   84284 cri.go:89] found id: "7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:23.454664   84284 cri.go:89] found id: ""
	I0420 00:00:23.454675   84284 logs.go:276] 1 containers: [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8]
	I0420 00:00:23.454728   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.459915   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:00:23.459992   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:00:23.535256   84284 cri.go:89] found id: "f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:23.535282   84284 cri.go:89] found id: ""
	I0420 00:00:23.535290   84284 logs.go:276] 1 containers: [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf]
	I0420 00:00:23.535343   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.540622   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:00:23.540688   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:00:23.580641   84284 cri.go:89] found id: "83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:23.580669   84284 cri.go:89] found id: ""
	I0420 00:00:23.580678   84284 logs.go:276] 1 containers: [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751]
	I0420 00:00:23.580732   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.587121   84284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:00:23.587213   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:00:23.630223   84284 cri.go:89] found id: "4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:23.630248   84284 cri.go:89] found id: ""
	I0420 00:00:23.630258   84284 logs.go:276] 1 containers: [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a]
	I0420 00:00:23.630316   84284 ssh_runner.go:195] Run: which crictl
	I0420 00:00:23.637654   84284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:00:23.637730   84284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:00:23.689237   84284 cri.go:89] found id: ""
	I0420 00:00:23.689274   84284 logs.go:276] 0 containers: []
	W0420 00:00:23.689294   84284 logs.go:278] No container was found matching "kindnet"
	I0420 00:00:23.689316   84284 logs.go:123] Gathering logs for etcd [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd] ...
	I0420 00:00:23.689336   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd"
	I0420 00:00:23.757462   84284 logs.go:123] Gathering logs for coredns [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8] ...
	I0420 00:00:23.757501   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8"
	I0420 00:00:23.799690   84284 logs.go:123] Gathering logs for kube-scheduler [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf] ...
	I0420 00:00:23.799730   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf"
	I0420 00:00:23.853507   84284 logs.go:123] Gathering logs for kube-proxy [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751] ...
	I0420 00:00:23.853543   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751"
	I0420 00:00:23.895549   84284 logs.go:123] Gathering logs for kube-controller-manager [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a] ...
	I0420 00:00:23.895586   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a"
	I0420 00:00:23.968131   84284 logs.go:123] Gathering logs for kubelet ...
	I0420 00:00:23.968167   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 00:00:24.054589   84284 logs.go:123] Gathering logs for dmesg ...
	I0420 00:00:24.054625   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:00:24.072058   84284 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:00:24.072087   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:00:24.234444   84284 logs.go:123] Gathering logs for kube-apiserver [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46] ...
	I0420 00:00:24.234479   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46"
	I0420 00:00:24.284213   84284 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:00:24.284243   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:00:25.146073   84284 logs.go:123] Gathering logs for container status ...
	I0420 00:00:25.146117   84284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:00:27.714469   84284 system_pods.go:59] 18 kube-system pods found
	I0420 00:00:27.714511   84284 system_pods.go:61] "coredns-7db6d8ff4d-tjjdl" [a4aaa144-7e87-4738-955d-cee58d25f65f] Running
	I0420 00:00:27.714518   84284 system_pods.go:61] "csi-hostpath-attacher-0" [80ec45ea-7278-4269-9a2b-95c17a5d8905] Running
	I0420 00:00:27.714522   84284 system_pods.go:61] "csi-hostpath-resizer-0" [62a92217-a4e5-4d4d-a1c0-0e3dde22b693] Running
	I0420 00:00:27.714526   84284 system_pods.go:61] "csi-hostpathplugin-cgkxc" [6e8794fe-4529-45f6-8265-b00805d2c5a6] Running
	I0420 00:00:27.714529   84284 system_pods.go:61] "etcd-addons-903502" [a4da6d01-6f9d-4dd7-8e91-d2d164a6f2b5] Running
	I0420 00:00:27.714532   84284 system_pods.go:61] "kube-apiserver-addons-903502" [e70811f3-57b8-4426-b8d6-8dba77808da5] Running
	I0420 00:00:27.714537   84284 system_pods.go:61] "kube-controller-manager-addons-903502" [017deffc-9e48-44c2-83a1-4d8d10b865b2] Running
	I0420 00:00:27.714543   84284 system_pods.go:61] "kube-ingress-dns-minikube" [abc6ceb0-2bb9-4edd-ae34-8021b81671b4] Running
	I0420 00:00:27.714548   84284 system_pods.go:61] "kube-proxy-v7nxm" [f33a980c-c758-4488-86c4-3a4bc3c54cb7] Running
	I0420 00:00:27.714553   84284 system_pods.go:61] "kube-scheduler-addons-903502" [6d9b73d1-7d8d-4b0c-8a23-c0de4c831552] Running
	I0420 00:00:27.714560   84284 system_pods.go:61] "metrics-server-c59844bb4-msq6m" [9f348eb7-76f5-4a36-ad8b-50129a6f3ddf] Running
	I0420 00:00:27.714566   84284 system_pods.go:61] "nvidia-device-plugin-daemonset-gxtqp" [e35a27ed-f4cb-4e7f-a1c3-b0ddcc6c2546] Running
	I0420 00:00:27.714575   84284 system_pods.go:61] "registry-proxy-jstzq" [f7e2cb22-44fa-4141-9d32-90e8315b38f4] Running
	I0420 00:00:27.714580   84284 system_pods.go:61] "registry-qdwvn" [35c4ac3f-fc00-413c-b0e4-a411f7888bf5] Running
	I0420 00:00:27.714593   84284 system_pods.go:61] "snapshot-controller-745499f584-bpl6d" [8f69b6ef-f9a0-42dc-844f-713828861953] Running
	I0420 00:00:27.714596   84284 system_pods.go:61] "snapshot-controller-745499f584-jzsfg" [e779f2a2-0b40-4e3f-9cd5-646fcc84205e] Running
	I0420 00:00:27.714599   84284 system_pods.go:61] "storage-provisioner" [caace344-c304-4889-a0a2-41479039397a] Running
	I0420 00:00:27.714602   84284 system_pods.go:61] "tiller-deploy-6677d64bcd-cjckf" [9d3c558e-6fdb-4a44-b71f-4353e1043b27] Running
	I0420 00:00:27.714611   84284 system_pods.go:74] duration metric: took 4.428391332s to wait for pod list to return data ...
	I0420 00:00:27.714621   84284 default_sa.go:34] waiting for default service account to be created ...
	I0420 00:00:27.716997   84284 default_sa.go:45] found service account: "default"
	I0420 00:00:27.717024   84284 default_sa.go:55] duration metric: took 2.394503ms for default service account to be created ...
	I0420 00:00:27.717034   84284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 00:00:27.725403   84284 system_pods.go:86] 18 kube-system pods found
	I0420 00:00:27.725425   84284 system_pods.go:89] "coredns-7db6d8ff4d-tjjdl" [a4aaa144-7e87-4738-955d-cee58d25f65f] Running
	I0420 00:00:27.725431   84284 system_pods.go:89] "csi-hostpath-attacher-0" [80ec45ea-7278-4269-9a2b-95c17a5d8905] Running
	I0420 00:00:27.725435   84284 system_pods.go:89] "csi-hostpath-resizer-0" [62a92217-a4e5-4d4d-a1c0-0e3dde22b693] Running
	I0420 00:00:27.725440   84284 system_pods.go:89] "csi-hostpathplugin-cgkxc" [6e8794fe-4529-45f6-8265-b00805d2c5a6] Running
	I0420 00:00:27.725443   84284 system_pods.go:89] "etcd-addons-903502" [a4da6d01-6f9d-4dd7-8e91-d2d164a6f2b5] Running
	I0420 00:00:27.725448   84284 system_pods.go:89] "kube-apiserver-addons-903502" [e70811f3-57b8-4426-b8d6-8dba77808da5] Running
	I0420 00:00:27.725452   84284 system_pods.go:89] "kube-controller-manager-addons-903502" [017deffc-9e48-44c2-83a1-4d8d10b865b2] Running
	I0420 00:00:27.725457   84284 system_pods.go:89] "kube-ingress-dns-minikube" [abc6ceb0-2bb9-4edd-ae34-8021b81671b4] Running
	I0420 00:00:27.725460   84284 system_pods.go:89] "kube-proxy-v7nxm" [f33a980c-c758-4488-86c4-3a4bc3c54cb7] Running
	I0420 00:00:27.725464   84284 system_pods.go:89] "kube-scheduler-addons-903502" [6d9b73d1-7d8d-4b0c-8a23-c0de4c831552] Running
	I0420 00:00:27.725468   84284 system_pods.go:89] "metrics-server-c59844bb4-msq6m" [9f348eb7-76f5-4a36-ad8b-50129a6f3ddf] Running
	I0420 00:00:27.725473   84284 system_pods.go:89] "nvidia-device-plugin-daemonset-gxtqp" [e35a27ed-f4cb-4e7f-a1c3-b0ddcc6c2546] Running
	I0420 00:00:27.725480   84284 system_pods.go:89] "registry-proxy-jstzq" [f7e2cb22-44fa-4141-9d32-90e8315b38f4] Running
	I0420 00:00:27.725485   84284 system_pods.go:89] "registry-qdwvn" [35c4ac3f-fc00-413c-b0e4-a411f7888bf5] Running
	I0420 00:00:27.725491   84284 system_pods.go:89] "snapshot-controller-745499f584-bpl6d" [8f69b6ef-f9a0-42dc-844f-713828861953] Running
	I0420 00:00:27.725495   84284 system_pods.go:89] "snapshot-controller-745499f584-jzsfg" [e779f2a2-0b40-4e3f-9cd5-646fcc84205e] Running
	I0420 00:00:27.725501   84284 system_pods.go:89] "storage-provisioner" [caace344-c304-4889-a0a2-41479039397a] Running
	I0420 00:00:27.725505   84284 system_pods.go:89] "tiller-deploy-6677d64bcd-cjckf" [9d3c558e-6fdb-4a44-b71f-4353e1043b27] Running
	I0420 00:00:27.725513   84284 system_pods.go:126] duration metric: took 8.472084ms to wait for k8s-apps to be running ...
	I0420 00:00:27.725521   84284 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 00:00:27.725563   84284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:00:27.745095   84284 system_svc.go:56] duration metric: took 19.563292ms WaitForService to wait for kubelet
	I0420 00:00:27.745129   84284 kubeadm.go:576] duration metric: took 2m10.69020034s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:00:27.745160   84284 node_conditions.go:102] verifying NodePressure condition ...
	I0420 00:00:27.748692   84284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:00:27.748730   84284 node_conditions.go:123] node cpu capacity is 2
	I0420 00:00:27.748748   84284 node_conditions.go:105] duration metric: took 3.581269ms to run NodePressure ...
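	(The NodePressure check above reads node capacity from the API; roughly the same figures are visible with the command below, where the node name is assumed from the pod names in this log:

	    # ephemeral-storage and cpu capacity reported at node_conditions.go:122-123 above
	    kubectl get node addons-903502 -o jsonpath='{.status.capacity}'
	)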
	I0420 00:00:27.748764   84284 start.go:240] waiting for startup goroutines ...
	I0420 00:00:27.748775   84284 start.go:245] waiting for cluster config update ...
	I0420 00:00:27.748805   84284 start.go:254] writing updated cluster config ...
	I0420 00:00:27.749204   84284 ssh_runner.go:195] Run: rm -f paused
	I0420 00:00:27.800243   84284 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 00:00:27.803122   84284 out.go:177] * Done! kubectl is now configured to use "addons-903502" cluster and "default" namespace by default
	
	
	==> CRI-O <==
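	(This section is the node's CRI-O journal as captured at failure time; one way to pull the same journal from the minikube VM directly, assuming the profile name from the log above:

	    minikube -p addons-903502 ssh -- sudo journalctl -u crio --no-pager -n 50
	)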
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.683749892Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b074a6fa-f906-4bb7-a39e-97d124021712 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.685126360Z" level=debug msg="Request: &StopPodSandboxRequest{PodSandboxId:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,}" file="otel-collector/interceptors.go:62" id=4b6c3fb7-24e0-47ee-95bf-5edddfcf7b19 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.685209505Z" level=info msg="Stopping pod sandbox: 40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18" file="server/sandbox_stop.go:18" id=4b6c3fb7-24e0-47ee-95bf-5edddfcf7b19 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.685227874Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9d5e95805281495deff9daa26045d03027ae44c4971eb06490359c495e4f5f42,Metadata:&PodSandboxMetadata{Name:hello-world-app-86c47465fc-wkhlc,Uid:35cf4ddc-6019-4f53-9d02-615978016068,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571421535234059,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-86c47465fc-wkhlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35cf4ddc-6019-4f53-9d02-615978016068,pod-template-hash: 86c47465fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:03:41.223317703Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a613c505e45e70d66e353380844832346ffdc1172a22ccad870d484bfdd05c4d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:357d421a-b251-4370-be01-0a523ab9c08b,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1713571280192675992,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 357d421a-b251-4370-be01-0a523ab9c08b,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:01:19.874743317Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:85b1ea0fd72f3a54468b4b5971b1a2f8342dab06f704689d889b69b2fda02d90,Metadata:&PodSandboxMetadata{Name:headlamp-7559bf459f-g8dbz,Uid:6faf5229-91df-43d8-9dc0-e15e7d5d5f1d,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571268640611667,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7559bf459f-g8dbz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 6faf5229-91df-43d8-9dc0-e15e7d5d5f1d,pod-template-hash: 7559bf459f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
04-20T00:01:07.426351268Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66d2abdb62b3c1be32c1f751ede49d5faf0bcae3e0eeb37c7f8767580fe35796,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-9gbc6,Uid:7a9b92ac-2ebd-421d-bbb6-1554362125aa,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571174540925853,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9gbc6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7a9b92ac-2ebd-421d-bbb6-1554362125aa,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T23:58:30.290138462Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0561fc45fce8405e200d4a51d97d2548cd2115660767668d08baff5c28632779,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-5ddbf7d777-s6wnr,Uid:63506f40-47b2-404e-bcd0-27cca6d4d119,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1713571104482462421,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-s6wnr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63506f40-47b2-404e-bcd0-27cca6d4d119,pod-template-hash: 5ddbf7d777,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T23:58:24.146970862Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-msq6m,Uid:9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571104046986670,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,k8s-app: metr
ics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T23:58:23.720358632Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1276d1f0cc15aedbc60131df62fc93f4d3398dccd0ed85722a91f0a51801c072,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:caace344-c304-4889-a0a2-41479039397a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571102896325171,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caace344-c304-4889-a0a2-41479039397a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\
"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-19T23:58:22.285932642Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e23cc76801eeee57244c4784caa87ddc1a3d0205075ec542b364d1197ce169a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-tjjdl,Uid:a4aaa144-7e87-4738-955d-cee58d25f65f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571097233994881,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-tjjdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aaa144-7e87-4738-955d-cee58d
25f65f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T23:58:16.900936955Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3834c06220ceba09220f99e8974219dabbe7a0ffb4ab70a35c9426246934feb7,Metadata:&PodSandboxMetadata{Name:kube-proxy-v7nxm,Uid:f33a980c-c758-4488-86c4-3a4bc3c54cb7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571097094946578,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-v7nxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33a980c-c758-4488-86c4-3a4bc3c54cb7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T23:58:16.774172054Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:244d50fc56b54386c25ecd6ab2a8692c239c10531eeeef93f6e5f7356aa465e4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-903502
,Uid:65f8ef1e46b514290a289b45fa916a37,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571077948233819,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f8ef1e46b514290a289b45fa916a37,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.36:8443,kubernetes.io/config.hash: 65f8ef1e46b514290a289b45fa916a37,kubernetes.io/config.seen: 2024-04-19T23:57:57.467888081Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f2d1b81b05076a763a243ac4c2f16a165645f6c9871e506e0ad7a5d40771b925,Metadata:&PodSandboxMetadata{Name:etcd-addons-903502,Uid:122d99b3b4eb697aeba820b61e795f94,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571077938061360,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-a
ddons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d99b3b4eb697aeba820b61e795f94,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.36:2379,kubernetes.io/config.hash: 122d99b3b4eb697aeba820b61e795f94,kubernetes.io/config.seen: 2024-04-19T23:57:57.467884911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2705afab2635f0b42731216bc57bb8e16a1e067af70bbfebcaa22eb04cad9572,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-903502,Uid:c3f3cb68d60b5a6a1e91cd34f53de8f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571077936389587,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f3cb68d60b5a6a1e91cd34f53de8f9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c3f3cb68
d60b5a6a1e91cd34f53de8f9,kubernetes.io/config.seen: 2024-04-19T23:57:57.467889240Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ef417ed11323d70760c1e54fc77a896bf89bc5259ef9b0e243e2732dccd4b8d6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-903502,Uid:110af44b051f67941f6c46f65a3705d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571077930461871,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110af44b051f67941f6c46f65a3705d9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 110af44b051f67941f6c46f65a3705d9,kubernetes.io/config.seen: 2024-04-19T23:57:57.467890437Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b074a6fa-f906-4bb7-a39e-97d124021712 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.685573941Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-msq6m Namespace:kube-system ID:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18 UID:9f348eb7-76f5-4a36-ad8b-50129a6f3ddf NetNS:/var/run/netns/db6cae2b-572a-4018-9769-41bdb58b3842 Networks:[{Name:bridge Ifname:eth0}] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod9f348eb7-76f5-4a36-ad8b-50129a6f3ddf PodAnnotations:0xc000ba82e0}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.686116194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc0e9ff3-40de-44ce-ae53-58372bbbf719 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.686160842Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-msq6m from CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:667"
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.686167417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc0e9ff3-40de-44ce-ae53-58372bbbf719 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.686947738Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97faaace-777e-4533-be5e-d482a8a03f00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.686991176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97faaace-777e-4533-be5e-d482a8a03f00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.687257468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57ebb4f9b3465ac6bddcb74b05b945e17d0ff577bceb4a737d7bf7255d93186c,PodSandboxId:9d5e95805281495deff9daa26045d03027ae44c4971eb06490359c495e4f5f42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713571424140818049,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wkhlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35cf4ddc-6019-4f53-9d02-615978016068,},Annotations:map[string]string{io.kubernetes.container.hash: feff1119,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3f21862f8c129979ab35d893169683ea68ead0638e90c0722fce0c73f4e82b,PodSandboxId:a613c505e45e70d66e353380844832346ffdc1172a22ccad870d484bfdd05c4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713571282871321373,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 357d421a-b251-4370-be01-0a523ab9c08b,},Annotations:map[string]string{io.kubern
etes.container.hash: f67037d2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c26faebc4b3bf023a1efb533b013bf709d695ea0d4c52ac3a17be5fd7a4e816,PodSandboxId:85b1ea0fd72f3a54468b4b5971b1a2f8342dab06f704689d889b69b2fda02d90,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713571273928126305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-g8dbz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 6faf5229-91df-43d8-9dc0-e15e7d5d5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: b8c9b944,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f609f34145ab572adcef26b266701098292d54c4f0a572f46fa68f71682bac16,PodSandboxId:66d2abdb62b3c1be32c1f751ede49d5faf0bcae3e0eeb37c7f8767580fe35796,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713571180822102530,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9gbc6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7a9b92ac-2ebd-421d-bbb6-1554362125aa,},Annotations:map[string]string{io.kubernetes.container.hash: 9378e5d5,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8457d0a5f0677498881a6f8ed64886b7b7a9f17340fede9e63acdd1466ef980,PodSandboxId:0561fc45fce8405e200d4a51d97d2548cd2115660767668d08baff5c28632779,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17135
71147694080648,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-s6wnr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63506f40-47b2-404e-bcd0-27cca6d4d119,},Annotations:map[string]string{io.kubernetes.container.hash: 30628b4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db50b237e66bf4f0646b0b5b8ecdd4e9efc10ec22d644f2f5be65ad98a75a58,PodSandboxId:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1713571142763194284,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7cf829,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e4688c82da65059d9a08a942a8551baa3a5acdc3f429353f2be3869643e4d,PodSandboxId:1276d1f0cc15aedbc60131df62fc93f4d3398dccd0ed85722a91f0a51801c072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713571103582498012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caace344-c304-4889-a0a2-41479039397a,},Annotations:map[string]string{io.kubernetes.container.hash: b4444dce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8,PodSandboxId:7e23cc76801eeee57244c4784caa87ddc1a3d0205075ec542b364d1197ce169a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713571100282476208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tjjdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aaa144-7e87-4738-955d-cee58d25f65f,},Annotations:map[string]string{io.kubernetes.container.hash: e0aecade,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751,PodSandb
oxId:3834c06220ceba09220f99e8974219dabbe7a0ffb4ab70a35c9426246934feb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713571097572994107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7nxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33a980c-c758-4488-86c4-3a4bc3c54cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 1098c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd,PodSandboxId:f2d1b81b05076a763a243ac4c2f1
6a165645f6c9871e506e0ad7a5d40771b925,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713571078196105180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d99b3b4eb697aeba820b61e795f94,},Annotations:map[string]string{io.kubernetes.container.hash: 19040de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf,PodSandboxId:ef417ed11323d70760c1e54fc77a896bf89bc5259ef9b0e243e2732dccd4b8d6,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713571078186187145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110af44b051f67941f6c46f65a3705d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a,PodSandboxId:2705afab2635f0b42731216bc57bb8e16a1e067af70bbfebcaa22eb04cad9572,Metadata:&ContainerMetadata
{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713571078190456406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f3cb68d60b5a6a1e91cd34f53de8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46,PodSandboxId:244d50fc56b54386c25ecd6ab2a8692c239c10531eeeef93f6e5f7356aa465e4,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713571078101240143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f8ef1e46b514290a289b45fa916a37,},Annotations:map[string]string{io.kubernetes.container.hash: 3c32be39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97faaace-777e-4533-be5e-d482a8a03f00 name=/runtime.v1.RuntimeService/ListContainers
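	(The ListPodSandbox and ListContainers entries above are raw CRI gRPC payloads logged by CRI-O; the same inventory in readable form comes from crictl on the node:

	    # human-readable view of the sandboxes and containers dumped above
	    sudo crictl pods
	    sudo crictl ps -a
	)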
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.689821336Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9db6935e-0f9f-4f7f-ac3f-8d27de45feeb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.691142403Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713571560691117329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9db6935e-0f9f-4f7f-ac3f-8d27de45feeb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.691384758Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},},}" file="otel-collector/interceptors.go:62" id=fd27cc6e-342a-4266-83a1-9a4825cfb4b3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.691464260Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-msq6m,Uid:9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571104046986670,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T23:58:23.720358632Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fd27cc6e-342a-4266-83a1-9a4825cfb4b3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.692428465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a9980fb-1bd0-4fe6-870d-90a5ffb78250 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.692594549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a9980fb-1bd0-4fe6-870d-90a5ffb78250 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.692988722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57ebb4f9b3465ac6bddcb74b05b945e17d0ff577bceb4a737d7bf7255d93186c,PodSandboxId:9d5e95805281495deff9daa26045d03027ae44c4971eb06490359c495e4f5f42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713571424140818049,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wkhlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35cf4ddc-6019-4f53-9d02-615978016068,},Annotations:map[string]string{io.kubernetes.container.hash: feff1119,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3f21862f8c129979ab35d893169683ea68ead0638e90c0722fce0c73f4e82b,PodSandboxId:a613c505e45e70d66e353380844832346ffdc1172a22ccad870d484bfdd05c4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713571282871321373,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 357d421a-b251-4370-be01-0a523ab9c08b,},Annotations:map[string]string{io.kubern
etes.container.hash: f67037d2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c26faebc4b3bf023a1efb533b013bf709d695ea0d4c52ac3a17be5fd7a4e816,PodSandboxId:85b1ea0fd72f3a54468b4b5971b1a2f8342dab06f704689d889b69b2fda02d90,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713571273928126305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-g8dbz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 6faf5229-91df-43d8-9dc0-e15e7d5d5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: b8c9b944,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f609f34145ab572adcef26b266701098292d54c4f0a572f46fa68f71682bac16,PodSandboxId:66d2abdb62b3c1be32c1f751ede49d5faf0bcae3e0eeb37c7f8767580fe35796,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713571180822102530,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9gbc6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7a9b92ac-2ebd-421d-bbb6-1554362125aa,},Annotations:map[string]string{io.kubernetes.container.hash: 9378e5d5,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8457d0a5f0677498881a6f8ed64886b7b7a9f17340fede9e63acdd1466ef980,PodSandboxId:0561fc45fce8405e200d4a51d97d2548cd2115660767668d08baff5c28632779,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17135
71147694080648,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-s6wnr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63506f40-47b2-404e-bcd0-27cca6d4d119,},Annotations:map[string]string{io.kubernetes.container.hash: 30628b4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db50b237e66bf4f0646b0b5b8ecdd4e9efc10ec22d644f2f5be65ad98a75a58,PodSandboxId:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1713571142763194284,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7cf829,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e4688c82da65059d9a08a942a8551baa3a5acdc3f429353f2be3869643e4d,PodSandboxId:1276d1f0cc15aedbc60131df62fc93f4d3398dccd0ed85722a91f0a51801c072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713571103582498012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caace344-c304-4889-a0a2-41479039397a,},Annotations:map[string]string{io.kubernetes.container.hash: b4444dce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8,PodSandboxId:7e23cc76801eeee57244c4784caa87ddc1a3d0205075ec542b364d1197ce169a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713571100282476208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tjjdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aaa144-7e87-4738-955d-cee58d25f65f,},Annotations:map[string]string{io.kubernetes.container.hash: e0aecade,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751,PodSandb
oxId:3834c06220ceba09220f99e8974219dabbe7a0ffb4ab70a35c9426246934feb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713571097572994107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7nxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33a980c-c758-4488-86c4-3a4bc3c54cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 1098c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd,PodSandboxId:f2d1b81b05076a763a243ac4c2f1
6a165645f6c9871e506e0ad7a5d40771b925,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713571078196105180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d99b3b4eb697aeba820b61e795f94,},Annotations:map[string]string{io.kubernetes.container.hash: 19040de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf,PodSandboxId:ef417ed11323d70760c1e54fc77a896bf89bc5259ef9b0e243e2732dccd4b8d6,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713571078186187145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110af44b051f67941f6c46f65a3705d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a,PodSandboxId:2705afab2635f0b42731216bc57bb8e16a1e067af70bbfebcaa22eb04cad9572,Metadata:&ContainerMetadata
{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713571078190456406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f3cb68d60b5a6a1e91cd34f53de8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46,PodSandboxId:244d50fc56b54386c25ecd6ab2a8692c239c10531eeeef93f6e5f7356aa465e4,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713571078101240143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f8ef1e46b514290a289b45fa916a37,},Annotations:map[string]string{io.kubernetes.container.hash: 3c32be39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a9980fb-1bd0-4fe6-870d-90a5ffb78250 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.694414366Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e81f5a36-75a7-4318-b3ef-170f25ce2e28 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.695000808Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-msq6m,Uid:9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713571104046986670,Network:&PodSandboxNetworkStatus{Ip:10.244.0.10,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T23:58:23.720358632Z,kubernetes.io/config.so
urce: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=e81f5a36-75a7-4318-b3ef-170f25ce2e28 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.696464859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},},}" file="otel-collector/interceptors.go:62" id=31f70819-3c6b-4600-9263-815ecce16d55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.696592552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31f70819-3c6b-4600-9263-815ecce16d55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.696668439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5db50b237e66bf4f0646b0b5b8ecdd4e9efc10ec22d644f2f5be65ad98a75a58,PodSandboxId:40599943e203fb70ba9515031a437b67958cba7500102f14e9cdcf3b5d43aa18,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1713571142763194284,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7cf829,io.kubern
etes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31f70819-3c6b-4600-9263-815ecce16d55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.697184217Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5db50b237e66bf4f0646b0b5b8ecdd4e9efc10ec22d644f2f5be65ad98a75a58,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7f61650f-70e5-4bfb-a821-f44f1670679f name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 20 00:06:00 addons-903502 crio[687]: time="2024-04-20 00:06:00.697335163Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5db50b237e66bf4f0646b0b5b8ecdd4e9efc10ec22d644f2f5be65ad98a75a58,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1713571142821052459,StartedAt:1713571142848436084,FinishedAt:1713571560604389566,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-msq6m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f348eb7-76f5-4a36-ad8b-50129a6f3ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7cf829
,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/var/lib/kubelet/pods/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf/volumes/kubernetes.io~empty-dir/tmp-dir,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf/containers/metrics-server/7e57c788,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_P
RIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf/volumes/kubernetes.io~projected/kube-api-access-gdxrc,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-msq6m_9f348eb7-76f5-4a36-ad8b-50129a6f3ddf/metrics-server/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:948,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7f61650f-70e5-4bfb-a821-f44f1670679f name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57ebb4f9b3465       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 2 minutes ago       Running             hello-world-app           0                   9d5e958052814       hello-world-app-86c47465fc-wkhlc
	bc3f21862f8c1       docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9                         4 minutes ago       Running             nginx                     0                   a613c505e45e7       nginx
	3c26faebc4b3b       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   4 minutes ago       Running             headlamp                  0                   85b1ea0fd72f3       headlamp-7559bf459f-g8dbz
	f609f34145ab5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   66d2abdb62b3c       gcp-auth-5db96cd9b4-9gbc6
	d8457d0a5f067       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         6 minutes ago       Running             yakd                      0                   0561fc45fce84       yakd-dashboard-5ddbf7d777-s6wnr
	5db50b237e66b       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Exited              metrics-server            0                   40599943e203f       metrics-server-c59844bb4-msq6m
	aa3e4688c82da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   1276d1f0cc15a       storage-provisioner
	7e808ce1f4a89       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   7e23cc76801ee       coredns-7db6d8ff4d-tjjdl
	83fd02b669c84       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        7 minutes ago       Running             kube-proxy                0                   3834c06220ceb       kube-proxy-v7nxm
	274269df8c392       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   f2d1b81b05076       etcd-addons-903502
	4e6d95f01f0cc       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        8 minutes ago       Running             kube-controller-manager   0                   2705afab2635f       kube-controller-manager-addons-903502
	f9f32e140359d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        8 minutes ago       Running             kube-scheduler            0                   ef417ed11323d       kube-scheduler-addons-903502
	330349efd9863       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        8 minutes ago       Running             kube-apiserver            0                   244d50fc56b54       kube-apiserver-addons-903502
	
	
	==> coredns [7e808ce1f4a89aa1bc315657da521d6cb2fa6e4cdd45031cf1336f635ba0c2c8] <==
	[INFO] 10.244.0.21:59358 - 7515 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000112478s
	[INFO] 10.244.0.21:55292 - 14168 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000138568s
	[INFO] 10.244.0.21:55292 - 41474 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000117015s
	[INFO] 10.244.0.21:59358 - 55490 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00021586s
	[INFO] 10.244.0.21:55292 - 4027 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073605s
	[INFO] 10.244.0.21:55292 - 59352 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000109523s
	[INFO] 10.244.0.21:59358 - 59637 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007701s
	[INFO] 10.244.0.21:59358 - 51731 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064641s
	[INFO] 10.244.0.21:59358 - 5248 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035522s
	[INFO] 10.244.0.21:59358 - 8218 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000022199s
	[INFO] 10.244.0.21:59358 - 32557 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000119549s
	[INFO] 10.244.0.21:39941 - 5877 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000090282s
	[INFO] 10.244.0.21:53271 - 34132 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00005708s
	[INFO] 10.244.0.21:53271 - 38080 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064765s
	[INFO] 10.244.0.21:39941 - 6789 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042858s
	[INFO] 10.244.0.21:39941 - 39462 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051825s
	[INFO] 10.244.0.21:53271 - 14114 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035513s
	[INFO] 10.244.0.21:53271 - 63402 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056631s
	[INFO] 10.244.0.21:39941 - 43697 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040294s
	[INFO] 10.244.0.21:39941 - 22299 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033932s
	[INFO] 10.244.0.21:53271 - 47411 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043473s
	[INFO] 10.244.0.21:39941 - 12862 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039329s
	[INFO] 10.244.0.21:53271 - 53853 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030856s
	[INFO] 10.244.0.21:39941 - 47299 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000401086s
	[INFO] 10.244.0.21:53271 - 64712 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036631s
	
	
	==> describe nodes <==
	Name:               addons-903502
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-903502
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=addons-903502
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T23_58_03_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-903502
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 23:58:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-903502
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:05:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:04:10 +0000   Fri, 19 Apr 2024 23:57:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:04:10 +0000   Fri, 19 Apr 2024 23:57:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:04:10 +0000   Fri, 19 Apr 2024 23:57:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:04:10 +0000   Fri, 19 Apr 2024 23:58:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    addons-903502
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 55198a5b1c754d1096d6da668a60272d
	  System UUID:                55198a5b-1c75-4d10-96d6-da668a60272d
	  Boot ID:                    62e9bbf8-4361-43d8-8ce0-a64c7b22127d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-wkhlc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  gcp-auth                    gcp-auth-5db96cd9b4-9gbc6                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  headlamp                    headlamp-7559bf459f-g8dbz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 coredns-7db6d8ff4d-tjjdl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m45s
	  kube-system                 etcd-addons-903502                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m58s
	  kube-system                 kube-apiserver-addons-903502             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-controller-manager-addons-903502    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-proxy-v7nxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-scheduler-addons-903502             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-s6wnr          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m42s  kube-proxy       
	  Normal  Starting                 7m58s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m58s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m58s  kubelet          Node addons-903502 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m58s  kubelet          Node addons-903502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m58s  kubelet          Node addons-903502 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m57s  kubelet          Node addons-903502 status is now: NodeReady
	  Normal  RegisteredNode           7m46s  node-controller  Node addons-903502 event: Registered Node addons-903502 in Controller
	
	
	==> dmesg <==
	[  +8.319846] systemd-fstab-generator[1491]: Ignoring "noauto" option for root device
	[  +5.510517] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.065068] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.379025] kauditd_printk_skb: 99 callbacks suppressed
	[ +21.827827] kauditd_printk_skb: 9 callbacks suppressed
	[Apr19 23:59] kauditd_printk_skb: 30 callbacks suppressed
	[ +10.385889] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.067849] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.195877] kauditd_printk_skb: 63 callbacks suppressed
	[  +6.175336] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.305392] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.904694] kauditd_printk_skb: 31 callbacks suppressed
	[Apr20 00:00] kauditd_printk_skb: 24 callbacks suppressed
	[ +19.533955] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.189134] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.005053] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.237675] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.732277] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.091957] kauditd_printk_skb: 41 callbacks suppressed
	[Apr20 00:01] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.942783] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.784284] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.928818] kauditd_printk_skb: 27 callbacks suppressed
	[Apr20 00:03] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.528772] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [274269df8c392f7fe578e7b9c02a576c97d9708067ee4dffc0ea534806e854cd] <==
	{"level":"warn","ts":"2024-04-19T23:59:19.554692Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T23:59:19.052813Z","time spent":"501.823208ms","remote":"127.0.0.1:41242","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4435,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dnt6r\" mod_revision:1011 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dnt6r\" value_size:4363 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dnt6r\" > >"}
	{"level":"warn","ts":"2024-04-19T23:59:19.554984Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.150614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14646"}
	{"level":"info","ts":"2024-04-19T23:59:19.555037Z","caller":"traceutil/trace.go:171","msg":"trace[802504494] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1016; }","duration":"294.227431ms","start":"2024-04-19T23:59:19.260801Z","end":"2024-04-19T23:59:19.555028Z","steps":["trace[802504494] 'agreement among raft nodes before linearized reading'  (duration: 294.096268ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:19.555457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.874886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85388"}
	{"level":"info","ts":"2024-04-19T23:59:19.555593Z","caller":"traceutil/trace.go:171","msg":"trace[1945804714] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1016; }","duration":"117.104111ms","start":"2024-04-19T23:59:19.438481Z","end":"2024-04-19T23:59:19.555585Z","steps":["trace[1945804714] 'agreement among raft nodes before linearized reading'  (duration: 116.881244ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:19.556011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.266209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11161"}
	{"level":"info","ts":"2024-04-19T23:59:19.556443Z","caller":"traceutil/trace.go:171","msg":"trace[1101703132] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1016; }","duration":"185.720812ms","start":"2024-04-19T23:59:19.370712Z","end":"2024-04-19T23:59:19.556433Z","steps":["trace[1101703132] 'agreement among raft nodes before linearized reading'  (duration: 184.279416ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T23:59:34.859222Z","caller":"traceutil/trace.go:171","msg":"trace[1624491735] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"135.293349ms","start":"2024-04-19T23:59:34.72391Z","end":"2024-04-19T23:59:34.859204Z","steps":["trace[1624491735] 'process raft request'  (duration: 134.824606ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T23:59:38.20733Z","caller":"traceutil/trace.go:171","msg":"trace[1231477436] linearizableReadLoop","detail":"{readStateIndex:1172; appliedIndex:1171; }","duration":"338.615765ms","start":"2024-04-19T23:59:37.8687Z","end":"2024-04-19T23:59:38.207316Z","steps":["trace[1231477436] 'read index received'  (duration: 338.362116ms)","trace[1231477436] 'applied index is now lower than readState.Index'  (duration: 253.049µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-19T23:59:38.207702Z","caller":"traceutil/trace.go:171","msg":"trace[29911588] transaction","detail":"{read_only:false; response_revision:1141; number_of_response:1; }","duration":"416.688652ms","start":"2024-04-19T23:59:37.790999Z","end":"2024-04-19T23:59:38.207688Z","steps":["trace[29911588] 'process raft request'  (duration: 416.102594ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:38.208858Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T23:59:37.790986Z","time spent":"417.816879ms","remote":"127.0.0.1:41220","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1140 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-19T23:59:38.207863Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"339.147147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11447"}
	{"level":"warn","ts":"2024-04-19T23:59:38.208668Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.913392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-msq6m\" ","response":"range_response_count:1 size:4459"}
	{"level":"info","ts":"2024-04-19T23:59:38.209684Z","caller":"traceutil/trace.go:171","msg":"trace[858521124] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-msq6m; range_end:; response_count:1; response_revision:1141; }","duration":"254.953183ms","start":"2024-04-19T23:59:37.954721Z","end":"2024-04-19T23:59:38.209674Z","steps":["trace[858521124] 'agreement among raft nodes before linearized reading'  (duration: 253.729065ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T23:59:38.209818Z","caller":"traceutil/trace.go:171","msg":"trace[1564542094] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1141; }","duration":"341.13803ms","start":"2024-04-19T23:59:37.868675Z","end":"2024-04-19T23:59:38.209813Z","steps":["trace[1564542094] 'agreement among raft nodes before linearized reading'  (duration: 339.059766ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:38.20984Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T23:59:37.868663Z","time spent":"341.168089ms","remote":"127.0.0.1:41242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11469,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-04-19T23:59:38.208727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.498902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-04-19T23:59:38.209906Z","caller":"traceutil/trace.go:171","msg":"trace[191841844] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1141; }","duration":"253.678081ms","start":"2024-04-19T23:59:37.956223Z","end":"2024-04-19T23:59:38.209902Z","steps":["trace[191841844] 'agreement among raft nodes before linearized reading'  (duration: 252.468045ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T23:59:43.059944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.319466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-msq6m\" ","response":"range_response_count:1 size:4459"}
	{"level":"info","ts":"2024-04-19T23:59:43.06006Z","caller":"traceutil/trace.go:171","msg":"trace[2128476879] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-msq6m; range_end:; response_count:1; response_revision:1182; }","duration":"105.565071ms","start":"2024-04-19T23:59:42.954483Z","end":"2024-04-19T23:59:43.060048Z","steps":["trace[2128476879] 'range keys from in-memory index tree'  (duration: 105.189716ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:00:46.434089Z","caller":"traceutil/trace.go:171","msg":"trace[182930177] transaction","detail":"{read_only:false; response_revision:1436; number_of_response:1; }","duration":"139.13784ms","start":"2024-04-20T00:00:46.294917Z","end":"2024-04-20T00:00:46.434054Z","steps":["trace[182930177] 'process raft request'  (duration: 139.005381ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:01:13.613903Z","caller":"traceutil/trace.go:171","msg":"trace[245126542] linearizableReadLoop","detail":"{readStateIndex:1660; appliedIndex:1659; }","duration":"172.188177ms","start":"2024-04-20T00:01:13.441676Z","end":"2024-04-20T00:01:13.613864Z","steps":["trace[245126542] 'read index received'  (duration: 172.07032ms)","trace[245126542] 'applied index is now lower than readState.Index'  (duration: 117.035µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T00:01:13.614215Z","caller":"traceutil/trace.go:171","msg":"trace[2116660004] transaction","detail":"{read_only:false; response_revision:1598; number_of_response:1; }","duration":"179.517516ms","start":"2024-04-20T00:01:13.434681Z","end":"2024-04-20T00:01:13.614199Z","steps":["trace[2116660004] 'process raft request'  (duration: 179.103168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:01:13.614446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.701396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3964"}
	{"level":"info","ts":"2024-04-20T00:01:13.614571Z","caller":"traceutil/trace.go:171","msg":"trace[2014175847] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1598; }","duration":"172.911378ms","start":"2024-04-20T00:01:13.441643Z","end":"2024-04-20T00:01:13.614555Z","steps":["trace[2014175847] 'agreement among raft nodes before linearized reading'  (duration: 172.67536ms)"],"step_count":1}
	
	
	==> gcp-auth [f609f34145ab572adcef26b266701098292d54c4f0a572f46fa68f71682bac16] <==
	2024/04/20 00:00:28 Ready to write response ...
	2024/04/20 00:00:28 Ready to marshal response ...
	2024/04/20 00:00:28 Ready to write response ...
	2024/04/20 00:00:35 Ready to marshal response ...
	2024/04/20 00:00:35 Ready to write response ...
	2024/04/20 00:00:38 Ready to marshal response ...
	2024/04/20 00:00:38 Ready to write response ...
	2024/04/20 00:00:42 Ready to marshal response ...
	2024/04/20 00:00:42 Ready to write response ...
	2024/04/20 00:00:50 Ready to marshal response ...
	2024/04/20 00:00:50 Ready to write response ...
	2024/04/20 00:00:55 Ready to marshal response ...
	2024/04/20 00:00:55 Ready to write response ...
	2024/04/20 00:01:07 Ready to marshal response ...
	2024/04/20 00:01:07 Ready to write response ...
	2024/04/20 00:01:07 Ready to marshal response ...
	2024/04/20 00:01:07 Ready to write response ...
	2024/04/20 00:01:07 Ready to marshal response ...
	2024/04/20 00:01:07 Ready to write response ...
	2024/04/20 00:01:16 Ready to marshal response ...
	2024/04/20 00:01:16 Ready to write response ...
	2024/04/20 00:01:19 Ready to marshal response ...
	2024/04/20 00:01:19 Ready to write response ...
	2024/04/20 00:03:41 Ready to marshal response ...
	2024/04/20 00:03:41 Ready to write response ...
	
	
	==> kernel <==
	 00:06:01 up 8 min,  0 users,  load average: 0.51, 1.39, 0.97
	Linux addons-903502 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [330349efd9863c7643d5c55334ed54d368fd7c688958d8b7b3f0383bc7e41b46] <==
	E0420 00:00:14.137069       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	E0420 00:00:51.706659       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0420 00:00:52.778480       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:60240: use of closed network connection
	I0420 00:00:55.227909       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0420 00:00:56.838442       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:60954: use of closed network connection
	I0420 00:01:07.356963       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.110.233"}
	I0420 00:01:19.724278       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0420 00:01:19.918422       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.4.224"}
	I0420 00:01:24.975394       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0420 00:01:26.044946       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0420 00:01:32.385130       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.385580       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:01:32.423825       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.424686       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:01:32.424888       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.454702       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.455639       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:01:32.459270       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:01:32.459342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0420 00:01:33.429402       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0420 00:01:33.460254       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0420 00:01:33.492740       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0420 00:03:41.376620       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.190.149"}
	E0420 00:03:44.681363       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0420 00:03:45.011815       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [4e6d95f01f0cc97d8bd433787930e5f1708eb30a8c6c84e7b69c9175d5000e8a] <==
	E0420 00:03:51.858883       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:03:54.920213       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0420 00:03:56.344809       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:03:56.344939       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:04:01.769114       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:04:01.769248       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:04:18.310270       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:04:18.310359       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:04:26.744690       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:04:26.744825       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:04:35.272726       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:04:35.272766       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:04:42.710776       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:04:42.710832       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:04:54.875154       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:04:54.875234       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:05:07.013894       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:05:07.014030       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:05:27.015430       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:05:27.015685       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:05:31.056672       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:05:31.056734       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:05:36.535159       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:05:36.535246       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:05:59.485811       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="13.346µs"
	
	
	==> kube-proxy [83fd02b669c84132a3e1e369fb07f5330b677e8b52ccf832d85de4134983d751] <==
	I0419 23:58:18.676567       1 server_linux.go:69] "Using iptables proxy"
	I0419 23:58:18.726017       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	I0419 23:58:18.915332       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 23:58:18.915433       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 23:58:18.915451       1 server_linux.go:165] "Using iptables Proxier"
	I0419 23:58:18.923684       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 23:58:18.923879       1 server.go:872] "Version info" version="v1.30.0"
	I0419 23:58:18.923918       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 23:58:18.925192       1 config.go:192] "Starting service config controller"
	I0419 23:58:18.925236       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 23:58:18.925254       1 config.go:101] "Starting endpoint slice config controller"
	I0419 23:58:18.925258       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 23:58:18.925707       1 config.go:319] "Starting node config controller"
	I0419 23:58:18.925742       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 23:58:19.029614       1 shared_informer.go:320] Caches are synced for node config
	I0419 23:58:19.029695       1 shared_informer.go:320] Caches are synced for service config
	I0419 23:58:19.029723       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f9f32e140359d7d54344ba79483240482e1f94c1b9b418dfdde5deb406f8f6bf] <==
	W0419 23:58:00.654989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 23:58:00.655025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 23:58:00.655036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 23:58:00.655044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 23:58:00.655053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 23:58:00.655060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 23:58:01.464602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0419 23:58:01.468397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0419 23:58:01.518796       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0419 23:58:01.520639       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0419 23:58:01.595607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0419 23:58:01.595764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0419 23:58:01.597827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0419 23:58:01.597972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0419 23:58:01.621364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0419 23:58:01.621468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0419 23:58:01.628015       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0419 23:58:01.628275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0419 23:58:01.652810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 23:58:01.652908       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 23:58:01.736455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0419 23:58:01.736682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0419 23:58:01.818153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 23:58:01.818208       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 23:58:03.822095       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 00:03:48 addons-903502 kubelet[1277]: I0420 00:03:48.156621    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e9d29b-3218-41d5-ad2a-c451c4fff701-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "14e9d29b-3218-41d5-ad2a-c451c4fff701" (UID: "14e9d29b-3218-41d5-ad2a-c451c4fff701"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 20 00:03:48 addons-903502 kubelet[1277]: I0420 00:03:48.248772    1277 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14e9d29b-3218-41d5-ad2a-c451c4fff701-webhook-cert\") on node \"addons-903502\" DevicePath \"\""
	Apr 20 00:03:48 addons-903502 kubelet[1277]: I0420 00:03:48.248806    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-scsfn\" (UniqueName: \"kubernetes.io/projected/14e9d29b-3218-41d5-ad2a-c451c4fff701-kube-api-access-scsfn\") on node \"addons-903502\" DevicePath \"\""
	Apr 20 00:03:49 addons-903502 kubelet[1277]: I0420 00:03:49.097665    1277 scope.go:117] "RemoveContainer" containerID="7689c374b8a7680017b8f0b0fce59b0537cc4f154b19105f356e1b6250c24868"
	Apr 20 00:03:49 addons-903502 kubelet[1277]: I0420 00:03:49.137136    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e9d29b-3218-41d5-ad2a-c451c4fff701" path="/var/lib/kubelet/pods/14e9d29b-3218-41d5-ad2a-c451c4fff701/volumes"
	Apr 20 00:04:03 addons-903502 kubelet[1277]: E0420 00:04:03.156188    1277 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:04:03 addons-903502 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:04:03 addons-903502 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:04:03 addons-903502 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:04:03 addons-903502 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:04:03 addons-903502 kubelet[1277]: I0420 00:04:03.814758    1277 scope.go:117] "RemoveContainer" containerID="1de05ed681bb6a57b83ccd6c4210b3dd862c2b02ffcc1ce2fe4359c024bab23e"
	Apr 20 00:04:03 addons-903502 kubelet[1277]: I0420 00:04:03.844184    1277 scope.go:117] "RemoveContainer" containerID="91c17cbbe53f6a2c6b01ab722c9ae75e6c5f54addf820a6635079d21e7009d46"
	Apr 20 00:04:03 addons-903502 kubelet[1277]: I0420 00:04:03.858299    1277 scope.go:117] "RemoveContainer" containerID="27e4dea68045f62bdbd048b95dcca8d0dbc1469b849a34db2b69deba0e26c0fd"
	Apr 20 00:05:03 addons-903502 kubelet[1277]: E0420 00:05:03.159342    1277 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:05:03 addons-903502 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:05:03 addons-903502 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:05:03 addons-903502 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:05:03 addons-903502 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:05:59 addons-903502 kubelet[1277]: I0420 00:05:59.509849    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-wkhlc" podStartSLOduration=136.24829854 podStartE2EDuration="2m18.509814767s" podCreationTimestamp="2024-04-20 00:03:41 +0000 UTC" firstStartedPulling="2024-04-20 00:03:41.853916944 +0000 UTC m=+338.882610829" lastFinishedPulling="2024-04-20 00:03:44.115433174 +0000 UTC m=+341.144127056" observedRunningTime="2024-04-20 00:03:45.069090118 +0000 UTC m=+342.097784021" watchObservedRunningTime="2024-04-20 00:05:59.509814767 +0000 UTC m=+476.538508666"
	Apr 20 00:06:00 addons-903502 kubelet[1277]: I0420 00:06:00.964254    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf-tmp-dir\") pod \"9f348eb7-76f5-4a36-ad8b-50129a6f3ddf\" (UID: \"9f348eb7-76f5-4a36-ad8b-50129a6f3ddf\") "
	Apr 20 00:06:00 addons-903502 kubelet[1277]: I0420 00:06:00.964328    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdxrc\" (UniqueName: \"kubernetes.io/projected/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf-kube-api-access-gdxrc\") pod \"9f348eb7-76f5-4a36-ad8b-50129a6f3ddf\" (UID: \"9f348eb7-76f5-4a36-ad8b-50129a6f3ddf\") "
	Apr 20 00:06:00 addons-903502 kubelet[1277]: I0420 00:06:00.964911    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "9f348eb7-76f5-4a36-ad8b-50129a6f3ddf" (UID: "9f348eb7-76f5-4a36-ad8b-50129a6f3ddf"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 20 00:06:00 addons-903502 kubelet[1277]: I0420 00:06:00.971754    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf-kube-api-access-gdxrc" (OuterVolumeSpecName: "kube-api-access-gdxrc") pod "9f348eb7-76f5-4a36-ad8b-50129a6f3ddf" (UID: "9f348eb7-76f5-4a36-ad8b-50129a6f3ddf"). InnerVolumeSpecName "kube-api-access-gdxrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 20 00:06:01 addons-903502 kubelet[1277]: I0420 00:06:01.065365    1277 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf-tmp-dir\") on node \"addons-903502\" DevicePath \"\""
	Apr 20 00:06:01 addons-903502 kubelet[1277]: I0420 00:06:01.065423    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gdxrc\" (UniqueName: \"kubernetes.io/projected/9f348eb7-76f5-4a36-ad8b-50129a6f3ddf-kube-api-access-gdxrc\") on node \"addons-903502\" DevicePath \"\""
	
	
	==> storage-provisioner [aa3e4688c82da65059d9a08a942a8551baa3a5acdc3f429353f2be3869643e4d] <==
	I0419 23:58:24.462056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0419 23:58:24.484235       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0419 23:58:24.484288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0419 23:58:24.500693       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0419 23:58:24.500871       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-903502_e145b11b-fa71-4df4-9ff9-2a0986c3c296!
	I0419 23:58:24.502760       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e6e0519-d9e7-4876-adec-39d19fbe23a7", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-903502_e145b11b-fa71-4df4-9ff9-2a0986c3c296 became leader
	I0419 23:58:24.601767       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-903502_e145b11b-fa71-4df4-9ff9-2a0986c3c296!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-903502 -n addons-903502
helpers_test.go:261: (dbg) Run:  kubectl --context addons-903502 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (322.50s)
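
Note: the kube-controller-manager log above repeatedly fails to list *v1.PartialObjectMetadata with "the server could not find the requested resource". One common cause of that pattern is an aggregated API (such as metrics.k8s.io served by metrics-server) that never reports Available; another is a recently deleted CRD group, several of which were removed during this run. Purely as an illustration, and not part of the test suite, a hypothetical Go helper that shells out to kubectl to check the former (context and APIService names are taken from this report, everything else is assumed):

    // check_metrics_api.go - hypothetical diagnostic, not minikube code.
    // Asks kubectl whether the v1beta1.metrics.k8s.io APIService ever became Available.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("kubectl", "--context", "addons-903502",
    		"get", "apiservice", "v1beta1.metrics.k8s.io",
    		"-o", "jsonpath={.status.conditions[?(@.type==\"Available\")].status}").CombinedOutput()
    	if err != nil {
    		fmt.Printf("apiservice lookup failed: %v\n%s\n", err, out)
    		return
    	}
    	if strings.TrimSpace(string(out)) != "True" {
    		fmt.Printf("metrics API not available: %q\n", out)
    		return
    	}
    	fmt.Println("metrics.k8s.io is available")
    }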

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.44s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-903502
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-903502: exit status 82 (2m0.486124774s)

                                                
                                                
-- stdout --
	* Stopping node "addons-903502"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-903502" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-903502
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-903502: exit status 11 (21.664145247s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-903502" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-903502
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-903502: exit status 11 (6.143420506s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-903502" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-903502
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-903502: exit status 11 (6.145989932s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-903502" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.44s)
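
Note: after the stop timed out, every addon command failed with "dial tcp 192.168.39.36:22: connect: no route to host", i.e. the node's SSH port was unreachable even though the driver still reported the VM as "Running". A minimal sketch, assuming one only wants to reproduce that reachability symptom outside the test harness (the address is taken from the log above; the program itself is illustrative, not minikube code):

    // probe_ssh.go - illustrative TCP probe of the node's SSH endpoint.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "192.168.39.36:22" // address reported in the MK_ADDON_*_PAUSED errors
    	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    	if err != nil {
    		fmt.Printf("%s unreachable: %v\n", addr, err) // matches the "no route to host" symptom
    		return
    	}
    	conn.Close()
    	fmt.Printf("%s reachable\n", addr)
    }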

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image save gcr.io/google-containers/addon-resizer:functional-238176 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-238176 image save gcr.io/google-containers/addon-resizer:functional-238176 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.593590429s)
functional_test.go:385: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.59s)
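
Note: functional_test.go:385 asserts that the tarball exists on disk after `image save`; here the command exited 0 but never produced the file. A minimal stand-alone sketch of that post-condition check (the path is taken from the failure message; the program is hypothetical, not the test's actual code):

    // verify_saved_image.go - hypothetical version of the existence check behind functional_test.go:385.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	tarball := "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar"
    	if _, err := os.Stat(tarball); err != nil {
    		// This is the condition the test tripped on: `image save` succeeded
    		// but the tarball was never written.
    		fmt.Printf("expected %s to exist after `image save`: %v\n", tarball, err)
    		os.Exit(1)
    	}
    	fmt.Println("tarball exists")
    }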

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0420 00:13:49.351997   93598 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:13:49.352096   93598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:13:49.352104   93598 out.go:304] Setting ErrFile to fd 2...
	I0420 00:13:49.352109   93598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:13:49.352271   93598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:13:49.352778   93598 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:13:49.352872   93598 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:13:49.353246   93598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:13:49.353288   93598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:13:49.367801   93598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45507
	I0420 00:13:49.368338   93598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:13:49.368974   93598 main.go:141] libmachine: Using API Version  1
	I0420 00:13:49.368998   93598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:13:49.369435   93598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:13:49.369668   93598 main.go:141] libmachine: (functional-238176) Calling .GetState
	I0420 00:13:49.371429   93598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:13:49.371477   93598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:13:49.385441   93598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0420 00:13:49.385963   93598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:13:49.386466   93598 main.go:141] libmachine: Using API Version  1
	I0420 00:13:49.386484   93598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:13:49.386835   93598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:13:49.387016   93598 main.go:141] libmachine: (functional-238176) Calling .DriverName
	I0420 00:13:49.387326   93598 ssh_runner.go:195] Run: systemctl --version
	I0420 00:13:49.387351   93598 main.go:141] libmachine: (functional-238176) Calling .GetSSHHostname
	I0420 00:13:49.390048   93598 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
	I0420 00:13:49.390500   93598 main.go:141] libmachine: (functional-238176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:cf:b4", ip: ""} in network mk-functional-238176: {Iface:virbr1 ExpiryTime:2024-04-20 01:10:07 +0000 UTC Type:0 Mac:52:54:00:87:cf:b4 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:functional-238176 Clientid:01:52:54:00:87:cf:b4}
	I0420 00:13:49.390535   93598 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined IP address 192.168.39.100 and MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
	I0420 00:13:49.390685   93598 main.go:141] libmachine: (functional-238176) Calling .GetSSHPort
	I0420 00:13:49.390854   93598 main.go:141] libmachine: (functional-238176) Calling .GetSSHKeyPath
	I0420 00:13:49.391021   93598 main.go:141] libmachine: (functional-238176) Calling .GetSSHUsername
	I0420 00:13:49.391187   93598 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/functional-238176/id_rsa Username:docker}
	I0420 00:13:49.473682   93598 cache_images.go:286] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
	W0420 00:13:49.473781   93598 cache_images.go:254] Failed to load cached images for profile functional-238176. make sure the profile is running. loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar: no such file or directory
	I0420 00:13:49.473810   93598 cache_images.go:262] succeeded pushing to: 
	I0420 00:13:49.473817   93598 cache_images.go:263] failed pushing to: functional-238176
	I0420 00:13:49.473844   93598 main.go:141] libmachine: Making call to close driver server
	I0420 00:13:49.473861   93598 main.go:141] libmachine: (functional-238176) Calling .Close
	I0420 00:13:49.474132   93598 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:13:49.474148   93598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:13:49.474160   93598 main.go:141] libmachine: Making call to close driver server
	I0420 00:13:49.474166   93598 main.go:141] libmachine: (functional-238176) Calling .Close
	I0420 00:13:49.474470   93598 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:13:49.474470   93598 main.go:141] libmachine: (functional-238176) DBG | Closing plugin on server side
	I0420 00:13:49.474487   93598 main.go:141] libmachine: Making call to close connection to plugin binary

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
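
Note: this failure is downstream of ImageSaveToFile above; the stderr shows `stat .../addon-resizer-save.tar: no such file or directory`, so the load step was handed a tarball the earlier save never produced. A hedged sketch of guarding the load with an up-front stat so the missing-file cause surfaces immediately (paths and profile name are from the log; the wrapper itself is hypothetical):

    // load_if_present.go - illustrative guard around `minikube image load`; not minikube code.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	tarball := "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar"
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Printf("refusing to load: %v\n", err) // same stat error as in the stderr above
    		os.Exit(1)
    	}
    	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-238176",
    		"image", "load", tarball, "--alsologtostderr").CombinedOutput()
    	fmt.Printf("%s\n", out)
    	if err != nil {
    		os.Exit(1)
    	}
    }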

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 node stop m02 -v=7 --alsologtostderr
E0420 00:18:52.618607   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:19:33.578826   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:20:27.814617   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.495677422s)

                                                
                                                
-- stdout --
	* Stopping node "ha-371738-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:18:43.817414   98593 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:18:43.817554   98593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:18:43.817567   98593 out.go:304] Setting ErrFile to fd 2...
	I0420 00:18:43.817573   98593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:18:43.817798   98593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:18:43.818042   98593 mustload.go:65] Loading cluster: ha-371738
	I0420 00:18:43.818426   98593 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:18:43.818446   98593 stop.go:39] StopHost: ha-371738-m02
	I0420 00:18:43.818900   98593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:18:43.818944   98593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:18:43.834154   98593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42321
	I0420 00:18:43.834692   98593 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:18:43.835293   98593 main.go:141] libmachine: Using API Version  1
	I0420 00:18:43.835318   98593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:18:43.835648   98593 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:18:43.838216   98593 out.go:177] * Stopping node "ha-371738-m02"  ...
	I0420 00:18:43.839605   98593 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0420 00:18:43.839642   98593 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:18:43.839843   98593 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0420 00:18:43.839883   98593 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:18:43.842831   98593 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:18:43.843314   98593 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:18:43.843347   98593 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:18:43.843447   98593 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:18:43.843643   98593 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:18:43.843816   98593 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:18:43.843978   98593 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:18:43.935835   98593 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0420 00:18:43.992108   98593 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0420 00:18:44.049810   98593 main.go:141] libmachine: Stopping "ha-371738-m02"...
	I0420 00:18:44.049846   98593 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:18:44.051447   98593 main.go:141] libmachine: (ha-371738-m02) Calling .Stop
	I0420 00:18:44.055102   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 0/120
	I0420 00:18:45.056705   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 1/120
	I0420 00:18:46.057984   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 2/120
	I0420 00:18:47.059931   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 3/120
	I0420 00:18:48.061261   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 4/120
	I0420 00:18:49.063587   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 5/120
	I0420 00:18:50.064950   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 6/120
	I0420 00:18:51.066632   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 7/120
	I0420 00:18:52.067857   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 8/120
	I0420 00:18:53.069692   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 9/120
	I0420 00:18:54.071950   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 10/120
	I0420 00:18:55.073359   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 11/120
	I0420 00:18:56.074528   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 12/120
	I0420 00:18:57.076003   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 13/120
	I0420 00:18:58.077516   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 14/120
	I0420 00:18:59.079548   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 15/120
	I0420 00:19:00.081835   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 16/120
	I0420 00:19:01.083776   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 17/120
	I0420 00:19:02.085180   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 18/120
	I0420 00:19:03.087213   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 19/120
	I0420 00:19:04.088989   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 20/120
	I0420 00:19:05.090450   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 21/120
	I0420 00:19:06.091927   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 22/120
	I0420 00:19:07.094457   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 23/120
	I0420 00:19:08.095679   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 24/120
	I0420 00:19:09.097732   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 25/120
	I0420 00:19:10.099072   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 26/120
	I0420 00:19:11.100209   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 27/120
	I0420 00:19:12.101634   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 28/120
	I0420 00:19:13.103825   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 29/120
	I0420 00:19:14.106028   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 30/120
	I0420 00:19:15.108214   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 31/120
	I0420 00:19:16.109747   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 32/120
	I0420 00:19:17.111817   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 33/120
	I0420 00:19:18.113372   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 34/120
	I0420 00:19:19.115202   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 35/120
	I0420 00:19:20.116534   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 36/120
	I0420 00:19:21.117924   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 37/120
	I0420 00:19:22.119783   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 38/120
	I0420 00:19:23.121055   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 39/120
	I0420 00:19:24.123158   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 40/120
	I0420 00:19:25.124961   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 41/120
	I0420 00:19:26.126398   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 42/120
	I0420 00:19:27.127871   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 43/120
	I0420 00:19:28.129185   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 44/120
	I0420 00:19:29.131283   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 45/120
	I0420 00:19:30.133086   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 46/120
	I0420 00:19:31.134477   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 47/120
	I0420 00:19:32.137074   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 48/120
	I0420 00:19:33.138409   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 49/120
	I0420 00:19:34.140747   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 50/120
	I0420 00:19:35.142137   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 51/120
	I0420 00:19:36.143825   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 52/120
	I0420 00:19:37.145188   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 53/120
	I0420 00:19:38.147511   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 54/120
	I0420 00:19:39.149454   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 55/120
	I0420 00:19:40.150738   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 56/120
	I0420 00:19:41.152072   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 57/120
	I0420 00:19:42.153170   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 58/120
	I0420 00:19:43.154379   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 59/120
	I0420 00:19:44.156412   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 60/120
	I0420 00:19:45.157946   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 61/120
	I0420 00:19:46.159969   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 62/120
	I0420 00:19:47.161056   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 63/120
	I0420 00:19:48.162471   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 64/120
	I0420 00:19:49.164342   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 65/120
	I0420 00:19:50.165814   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 66/120
	I0420 00:19:51.167905   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 67/120
	I0420 00:19:52.169689   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 68/120
	I0420 00:19:53.171930   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 69/120
	I0420 00:19:54.174076   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 70/120
	I0420 00:19:55.176167   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 71/120
	I0420 00:19:56.177533   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 72/120
	I0420 00:19:57.179768   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 73/120
	I0420 00:19:58.181129   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 74/120
	I0420 00:19:59.182495   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 75/120
	I0420 00:20:00.183690   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 76/120
	I0420 00:20:01.185007   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 77/120
	I0420 00:20:02.186322   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 78/120
	I0420 00:20:03.187776   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 79/120
	I0420 00:20:04.189781   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 80/120
	I0420 00:20:05.191808   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 81/120
	I0420 00:20:06.193127   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 82/120
	I0420 00:20:07.194306   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 83/120
	I0420 00:20:08.196311   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 84/120
	I0420 00:20:09.198063   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 85/120
	I0420 00:20:10.199796   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 86/120
	I0420 00:20:11.201649   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 87/120
	I0420 00:20:12.203060   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 88/120
	I0420 00:20:13.204792   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 89/120
	I0420 00:20:14.206671   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 90/120
	I0420 00:20:15.208658   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 91/120
	I0420 00:20:16.209998   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 92/120
	I0420 00:20:17.212041   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 93/120
	I0420 00:20:18.213277   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 94/120
	I0420 00:20:19.214888   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 95/120
	I0420 00:20:20.216184   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 96/120
	I0420 00:20:21.217868   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 97/120
	I0420 00:20:22.219022   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 98/120
	I0420 00:20:23.220664   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 99/120
	I0420 00:20:24.222777   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 100/120
	I0420 00:20:25.225136   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 101/120
	I0420 00:20:26.226398   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 102/120
	I0420 00:20:27.227800   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 103/120
	I0420 00:20:28.229031   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 104/120
	I0420 00:20:29.230958   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 105/120
	I0420 00:20:30.232600   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 106/120
	I0420 00:20:31.233939   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 107/120
	I0420 00:20:32.235397   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 108/120
	I0420 00:20:33.236767   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 109/120
	I0420 00:20:34.239027   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 110/120
	I0420 00:20:35.240109   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 111/120
	I0420 00:20:36.241730   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 112/120
	I0420 00:20:37.242917   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 113/120
	I0420 00:20:38.244265   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 114/120
	I0420 00:20:39.246117   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 115/120
	I0420 00:20:40.247534   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 116/120
	I0420 00:20:41.248917   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 117/120
	I0420 00:20:42.250182   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 118/120
	I0420 00:20:43.251625   98593 main.go:141] libmachine: (ha-371738-m02) Waiting for machine to stop 119/120
	I0420 00:20:44.252771   98593 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0420 00:20:44.253130   98593 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-371738 node stop m02 -v=7 --alsologtostderr": exit status 30
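
Note: the stderr above shows the kvm2 driver polling "Waiting for machine to stop N/120" once a second and giving up after two minutes with the VM still "Running", which is what produced exit status 30. A schematic sketch of that poll-until-stopped loop, with a hypothetical getState standing in for the driver's GetState call (illustrative only, not the libmachine implementation):

    // wait_for_stop.go - schematic version of the 120 x 1s stop poll visible in the log above.
    package main

    import (
    	"fmt"
    	"time"
    )

    // getState is a placeholder; real code would query the kvm2 driver for the VM state.
    func getState() string { return "Running" }

    func main() {
    	for i := 0; i < 120; i++ {
    		if getState() == "Stopped" {
    			fmt.Println("machine stopped")
    			return
    		}
    		fmt.Printf("Waiting for machine to stop %d/120\n", i)
    		time.Sleep(time.Second)
    	}
    	// After 120 attempts the driver gives up and reports the stop error seen above.
    	fmt.Println(`stop err: unable to stop vm, current state "Running"`)
    }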
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
E0420 00:20:55.501483   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 3 (19.153312333s)

                                                
                                                
-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-371738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:20:44.314071   99037 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:20:44.314217   99037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:20:44.314229   99037 out.go:304] Setting ErrFile to fd 2...
	I0420 00:20:44.314233   99037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:20:44.314451   99037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:20:44.314669   99037 out.go:298] Setting JSON to false
	I0420 00:20:44.314696   99037 mustload.go:65] Loading cluster: ha-371738
	I0420 00:20:44.314810   99037 notify.go:220] Checking for updates...
	I0420 00:20:44.315188   99037 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:20:44.315206   99037 status.go:255] checking status of ha-371738 ...
	I0420 00:20:44.315733   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:20:44.315814   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:20:44.337544   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0420 00:20:44.338016   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:20:44.338723   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:20:44.338766   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:20:44.339178   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:20:44.339413   99037 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:20:44.341242   99037 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:20:44.341268   99037 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:20:44.341572   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:20:44.341609   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:20:44.355915   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0420 00:20:44.356292   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:20:44.356767   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:20:44.356794   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:20:44.357095   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:20:44.357323   99037 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:20:44.360103   99037 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:20:44.360544   99037 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:20:44.360565   99037 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:20:44.360706   99037 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:20:44.361106   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:20:44.361161   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:20:44.375349   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43419
	I0420 00:20:44.375706   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:20:44.376170   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:20:44.376199   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:20:44.376516   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:20:44.376751   99037 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:20:44.376949   99037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:20:44.376978   99037 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:20:44.379730   99037 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:20:44.380130   99037 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:20:44.380166   99037 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:20:44.380298   99037 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:20:44.380488   99037 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:20:44.380601   99037 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:20:44.380798   99037 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:20:44.475737   99037 ssh_runner.go:195] Run: systemctl --version
	I0420 00:20:44.485979   99037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:20:44.507360   99037 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:20:44.507396   99037 api_server.go:166] Checking apiserver status ...
	I0420 00:20:44.507435   99037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:20:44.525699   99037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup
	W0420 00:20:44.536699   99037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:20:44.536756   99037 ssh_runner.go:195] Run: ls
	I0420 00:20:44.541971   99037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:20:44.550340   99037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:20:44.550364   99037 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:20:44.550375   99037 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:20:44.550390   99037 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:20:44.550687   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:20:44.550724   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:20:44.567075   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42657
	I0420 00:20:44.567466   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:20:44.567973   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:20:44.567995   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:20:44.568312   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:20:44.568562   99037 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:20:44.570375   99037 status.go:330] ha-371738-m02 host status = "Running" (err=<nil>)
	I0420 00:20:44.570394   99037 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:20:44.570706   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:20:44.570750   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:20:44.585646   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0420 00:20:44.586031   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:20:44.586516   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:20:44.586552   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:20:44.586867   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:20:44.587070   99037 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:20:44.590063   99037 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:20:44.590498   99037 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:20:44.590525   99037 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:20:44.590746   99037 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:20:44.591087   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:20:44.591131   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:20:44.609358   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I0420 00:20:44.609832   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:20:44.610331   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:20:44.610351   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:20:44.610748   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:20:44.610946   99037 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:20:44.611138   99037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:20:44.611159   99037 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:20:44.614129   99037 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:20:44.614491   99037 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:20:44.614515   99037 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:20:44.614664   99037 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:20:44.614821   99037 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:20:44.615010   99037 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:20:44.615210   99037 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	W0420 00:21:03.049586   99037 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:03.049720   99037 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0420 00:21:03.049769   99037 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:03.049781   99037 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0420 00:21:03.049811   99037 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:03.049826   99037 status.go:255] checking status of ha-371738-m03 ...
	I0420 00:21:03.050167   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:03.050225   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:03.066356   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39009
	I0420 00:21:03.066849   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:03.067334   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:21:03.067358   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:03.067731   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:03.067965   99037 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:21:03.069779   99037 status.go:330] ha-371738-m03 host status = "Running" (err=<nil>)
	I0420 00:21:03.069799   99037 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:03.070107   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:03.070157   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:03.084644   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0420 00:21:03.085071   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:03.085556   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:21:03.085582   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:03.085895   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:03.086084   99037 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:21:03.088759   99037 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:03.089190   99037 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:03.089217   99037 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:03.089416   99037 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:03.089736   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:03.089783   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:03.104230   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44995
	I0420 00:21:03.104608   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:03.105060   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:21:03.105083   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:03.105434   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:03.105676   99037 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:21:03.105890   99037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:03.105914   99037 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:21:03.108269   99037 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:03.108627   99037 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:03.108650   99037 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:03.108741   99037 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:21:03.108927   99037 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:21:03.109055   99037 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:21:03.109221   99037 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:21:03.191941   99037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:03.211124   99037 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:03.211160   99037 api_server.go:166] Checking apiserver status ...
	I0420 00:21:03.211202   99037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:03.226300   99037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0420 00:21:03.235904   99037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:03.235954   99037 ssh_runner.go:195] Run: ls
	I0420 00:21:03.240530   99037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:03.245757   99037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:03.245784   99037 status.go:422] ha-371738-m03 apiserver status = Running (err=<nil>)
	I0420 00:21:03.245808   99037 status.go:257] ha-371738-m03 status: &{Name:ha-371738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:03.245827   99037 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:21:03.246112   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:03.246158   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:03.262056   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0420 00:21:03.262617   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:03.263143   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:21:03.263167   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:03.263466   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:03.263702   99037 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:21:03.265134   99037 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:21:03.265151   99037 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:03.265461   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:03.265503   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:03.279889   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0420 00:21:03.280238   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:03.280647   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:21:03.280665   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:03.280984   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:03.281191   99037 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:21:03.283853   99037 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:03.284277   99037 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:03.284313   99037 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:03.284414   99037 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:03.284683   99037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:03.284717   99037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:03.299973   99037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45689
	I0420 00:21:03.300300   99037 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:03.300769   99037 main.go:141] libmachine: Using API Version  1
	I0420 00:21:03.300790   99037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:03.301141   99037 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:03.301519   99037 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:21:03.301721   99037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:03.301746   99037 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:21:03.304608   99037 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:03.304975   99037 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:03.305012   99037 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:03.305153   99037 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:21:03.305302   99037 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:21:03.305455   99037 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:21:03.305588   99037 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:21:03.387573   99037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:03.406096   99037 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr" : exit status 3
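In the status output above, ha-371738-m02 is reported as Host:Error / Kubelet:Nonexistent because the SSH dial to 192.168.39.48:22 fails with "no route to host" before any remote checks can run. The Go sketch below illustrates that degradation path in general terms only; it is not minikube's status implementation, and nodeStatus and probeNode are invented names for this example.

// statusprobe.go — hypothetical sketch: if the node's SSH endpoint is
// unreachable, report a degraded status instead of running remote checks.
package main

import (
	"fmt"
	"net"
	"time"
)

type nodeStatus struct {
	Host, Kubelet, APIServer string
}

// probeNode attempts a TCP dial to the node's SSH address; on failure
// (e.g. "no route to host") it returns the degraded status without
// attempting any further checks on the node.
func probeNode(addr string) nodeStatus {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return nodeStatus{Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	conn.Close()
	return nodeStatus{Host: "Running", Kubelet: "Running", APIServer: "Running"}
}

func main() {
	// 192.0.2.1 is a reserved TEST-NET address, so this dial is expected to fail.
	fmt.Printf("%+v\n", probeNode("192.0.2.1:22"))
}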
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-371738 -n ha-371738
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-371738 logs -n 25: (1.529586215s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3122242891/001/cp-test_ha-371738-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738:/home/docker/cp-test_ha-371738-m03_ha-371738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738 sudo cat                                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m02:/home/docker/cp-test_ha-371738-m03_ha-371738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m02 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04:/home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m04 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp testdata/cp-test.txt                                                | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3122242891/001/cp-test_ha-371738-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738:/home/docker/cp-test_ha-371738-m04_ha-371738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738 sudo cat                                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m02:/home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m02 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03:/home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m03 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-371738 node stop m02 -v=7                                                     | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:14:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:14:10.236871   94171 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:14:10.237002   94171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:14:10.237012   94171 out.go:304] Setting ErrFile to fd 2...
	I0420 00:14:10.237017   94171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:14:10.237224   94171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:14:10.237860   94171 out.go:298] Setting JSON to false
	I0420 00:14:10.238805   94171 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10597,"bootTime":1713561453,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 00:14:10.238866   94171 start.go:139] virtualization: kvm guest
	I0420 00:14:10.241171   94171 out.go:177] * [ha-371738] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 00:14:10.242724   94171 notify.go:220] Checking for updates...
	I0420 00:14:10.242772   94171 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:14:10.244171   94171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:14:10.245616   94171 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:14:10.246951   94171 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:14:10.248202   94171 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 00:14:10.249410   94171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:14:10.250695   94171 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:14:10.290457   94171 out.go:177] * Using the kvm2 driver based on user configuration
	I0420 00:14:10.291763   94171 start.go:297] selected driver: kvm2
	I0420 00:14:10.291777   94171 start.go:901] validating driver "kvm2" against <nil>
	I0420 00:14:10.291792   94171 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:14:10.292734   94171 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:14:10.292815   94171 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 00:14:10.307519   94171 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 00:14:10.307559   94171 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0420 00:14:10.307767   94171 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:14:10.307831   94171 cni.go:84] Creating CNI manager for ""
	I0420 00:14:10.307843   94171 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0420 00:14:10.307851   94171 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0420 00:14:10.307907   94171 start.go:340] cluster config:
	{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:14:10.308008   94171 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:14:10.309909   94171 out.go:177] * Starting "ha-371738" primary control-plane node in "ha-371738" cluster
	I0420 00:14:10.311299   94171 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:14:10.311327   94171 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 00:14:10.311335   94171 cache.go:56] Caching tarball of preloaded images
	I0420 00:14:10.311410   94171 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:14:10.311421   94171 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:14:10.311726   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:14:10.311748   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json: {Name:mkbaaf47d21f09ecf6d9895217ef92a775501247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:10.311881   94171 start.go:360] acquireMachinesLock for ha-371738: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:14:10.311907   94171 start.go:364] duration metric: took 14.266µs to acquireMachinesLock for "ha-371738"
	I0420 00:14:10.311921   94171 start.go:93] Provisioning new machine with config: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:14:10.311970   94171 start.go:125] createHost starting for "" (driver="kvm2")
	I0420 00:14:10.313590   94171 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0420 00:14:10.314213   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:14:10.314264   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:14:10.329153   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0420 00:14:10.329627   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:14:10.330168   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:14:10.330203   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:14:10.330554   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:14:10.330794   94171 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:14:10.330955   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:10.331131   94171 start.go:159] libmachine.API.Create for "ha-371738" (driver="kvm2")
	I0420 00:14:10.331163   94171 client.go:168] LocalClient.Create starting
	I0420 00:14:10.331198   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 00:14:10.331235   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:14:10.331255   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:14:10.331319   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 00:14:10.331344   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:14:10.331358   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:14:10.331384   94171 main.go:141] libmachine: Running pre-create checks...
	I0420 00:14:10.331396   94171 main.go:141] libmachine: (ha-371738) Calling .PreCreateCheck
	I0420 00:14:10.331708   94171 main.go:141] libmachine: (ha-371738) Calling .GetConfigRaw
	I0420 00:14:10.332106   94171 main.go:141] libmachine: Creating machine...
	I0420 00:14:10.332123   94171 main.go:141] libmachine: (ha-371738) Calling .Create
	I0420 00:14:10.332255   94171 main.go:141] libmachine: (ha-371738) Creating KVM machine...
	I0420 00:14:10.333707   94171 main.go:141] libmachine: (ha-371738) DBG | found existing default KVM network
	I0420 00:14:10.334370   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.334229   94195 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0420 00:14:10.334396   94171 main.go:141] libmachine: (ha-371738) DBG | created network xml: 
	I0420 00:14:10.334411   94171 main.go:141] libmachine: (ha-371738) DBG | <network>
	I0420 00:14:10.334426   94171 main.go:141] libmachine: (ha-371738) DBG |   <name>mk-ha-371738</name>
	I0420 00:14:10.334488   94171 main.go:141] libmachine: (ha-371738) DBG |   <dns enable='no'/>
	I0420 00:14:10.334517   94171 main.go:141] libmachine: (ha-371738) DBG |   
	I0420 00:14:10.334531   94171 main.go:141] libmachine: (ha-371738) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0420 00:14:10.334543   94171 main.go:141] libmachine: (ha-371738) DBG |     <dhcp>
	I0420 00:14:10.334555   94171 main.go:141] libmachine: (ha-371738) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0420 00:14:10.334572   94171 main.go:141] libmachine: (ha-371738) DBG |     </dhcp>
	I0420 00:14:10.334582   94171 main.go:141] libmachine: (ha-371738) DBG |   </ip>
	I0420 00:14:10.334593   94171 main.go:141] libmachine: (ha-371738) DBG |   
	I0420 00:14:10.334603   94171 main.go:141] libmachine: (ha-371738) DBG | </network>
	I0420 00:14:10.334613   94171 main.go:141] libmachine: (ha-371738) DBG | 
	I0420 00:14:10.339367   94171 main.go:141] libmachine: (ha-371738) DBG | trying to create private KVM network mk-ha-371738 192.168.39.0/24...
	I0420 00:14:10.401514   94171 main.go:141] libmachine: (ha-371738) DBG | private KVM network mk-ha-371738 192.168.39.0/24 created
	I0420 00:14:10.401566   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.401466   94195 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:14:10.401584   94171 main.go:141] libmachine: (ha-371738) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738 ...
	I0420 00:14:10.401612   94171 main.go:141] libmachine: (ha-371738) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 00:14:10.401633   94171 main.go:141] libmachine: (ha-371738) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 00:14:10.637507   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.637346   94195 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa...
	I0420 00:14:10.807033   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.806897   94195 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/ha-371738.rawdisk...
	I0420 00:14:10.807072   94171 main.go:141] libmachine: (ha-371738) DBG | Writing magic tar header
	I0420 00:14:10.807082   94171 main.go:141] libmachine: (ha-371738) DBG | Writing SSH key tar header
	I0420 00:14:10.807090   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.807040   94195 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738 ...
	I0420 00:14:10.807241   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738
	I0420 00:14:10.807277   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 00:14:10.807298   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738 (perms=drwx------)
	I0420 00:14:10.807332   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 00:14:10.807349   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 00:14:10.807361   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:14:10.807377   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 00:14:10.807387   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 00:14:10.807404   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 00:14:10.807419   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 00:14:10.807432   94171 main.go:141] libmachine: (ha-371738) Creating domain...
	I0420 00:14:10.807446   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 00:14:10.807463   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins
	I0420 00:14:10.807482   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home
	I0420 00:14:10.807499   94171 main.go:141] libmachine: (ha-371738) DBG | Skipping /home - not owner
	I0420 00:14:10.808598   94171 main.go:141] libmachine: (ha-371738) define libvirt domain using xml: 
	I0420 00:14:10.808621   94171 main.go:141] libmachine: (ha-371738) <domain type='kvm'>
	I0420 00:14:10.808627   94171 main.go:141] libmachine: (ha-371738)   <name>ha-371738</name>
	I0420 00:14:10.808632   94171 main.go:141] libmachine: (ha-371738)   <memory unit='MiB'>2200</memory>
	I0420 00:14:10.808637   94171 main.go:141] libmachine: (ha-371738)   <vcpu>2</vcpu>
	I0420 00:14:10.808641   94171 main.go:141] libmachine: (ha-371738)   <features>
	I0420 00:14:10.808646   94171 main.go:141] libmachine: (ha-371738)     <acpi/>
	I0420 00:14:10.808650   94171 main.go:141] libmachine: (ha-371738)     <apic/>
	I0420 00:14:10.808656   94171 main.go:141] libmachine: (ha-371738)     <pae/>
	I0420 00:14:10.808665   94171 main.go:141] libmachine: (ha-371738)     
	I0420 00:14:10.808678   94171 main.go:141] libmachine: (ha-371738)   </features>
	I0420 00:14:10.808685   94171 main.go:141] libmachine: (ha-371738)   <cpu mode='host-passthrough'>
	I0420 00:14:10.808698   94171 main.go:141] libmachine: (ha-371738)   
	I0420 00:14:10.808701   94171 main.go:141] libmachine: (ha-371738)   </cpu>
	I0420 00:14:10.808706   94171 main.go:141] libmachine: (ha-371738)   <os>
	I0420 00:14:10.808711   94171 main.go:141] libmachine: (ha-371738)     <type>hvm</type>
	I0420 00:14:10.808718   94171 main.go:141] libmachine: (ha-371738)     <boot dev='cdrom'/>
	I0420 00:14:10.808723   94171 main.go:141] libmachine: (ha-371738)     <boot dev='hd'/>
	I0420 00:14:10.808730   94171 main.go:141] libmachine: (ha-371738)     <bootmenu enable='no'/>
	I0420 00:14:10.808734   94171 main.go:141] libmachine: (ha-371738)   </os>
	I0420 00:14:10.808739   94171 main.go:141] libmachine: (ha-371738)   <devices>
	I0420 00:14:10.808748   94171 main.go:141] libmachine: (ha-371738)     <disk type='file' device='cdrom'>
	I0420 00:14:10.808764   94171 main.go:141] libmachine: (ha-371738)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/boot2docker.iso'/>
	I0420 00:14:10.808776   94171 main.go:141] libmachine: (ha-371738)       <target dev='hdc' bus='scsi'/>
	I0420 00:14:10.808785   94171 main.go:141] libmachine: (ha-371738)       <readonly/>
	I0420 00:14:10.808789   94171 main.go:141] libmachine: (ha-371738)     </disk>
	I0420 00:14:10.808798   94171 main.go:141] libmachine: (ha-371738)     <disk type='file' device='disk'>
	I0420 00:14:10.808803   94171 main.go:141] libmachine: (ha-371738)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 00:14:10.808828   94171 main.go:141] libmachine: (ha-371738)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/ha-371738.rawdisk'/>
	I0420 00:14:10.808852   94171 main.go:141] libmachine: (ha-371738)       <target dev='hda' bus='virtio'/>
	I0420 00:14:10.808876   94171 main.go:141] libmachine: (ha-371738)     </disk>
	I0420 00:14:10.808900   94171 main.go:141] libmachine: (ha-371738)     <interface type='network'>
	I0420 00:14:10.808907   94171 main.go:141] libmachine: (ha-371738)       <source network='mk-ha-371738'/>
	I0420 00:14:10.808915   94171 main.go:141] libmachine: (ha-371738)       <model type='virtio'/>
	I0420 00:14:10.808920   94171 main.go:141] libmachine: (ha-371738)     </interface>
	I0420 00:14:10.808927   94171 main.go:141] libmachine: (ha-371738)     <interface type='network'>
	I0420 00:14:10.808933   94171 main.go:141] libmachine: (ha-371738)       <source network='default'/>
	I0420 00:14:10.808940   94171 main.go:141] libmachine: (ha-371738)       <model type='virtio'/>
	I0420 00:14:10.808946   94171 main.go:141] libmachine: (ha-371738)     </interface>
	I0420 00:14:10.808953   94171 main.go:141] libmachine: (ha-371738)     <serial type='pty'>
	I0420 00:14:10.808959   94171 main.go:141] libmachine: (ha-371738)       <target port='0'/>
	I0420 00:14:10.808963   94171 main.go:141] libmachine: (ha-371738)     </serial>
	I0420 00:14:10.808971   94171 main.go:141] libmachine: (ha-371738)     <console type='pty'>
	I0420 00:14:10.808979   94171 main.go:141] libmachine: (ha-371738)       <target type='serial' port='0'/>
	I0420 00:14:10.808992   94171 main.go:141] libmachine: (ha-371738)     </console>
	I0420 00:14:10.809001   94171 main.go:141] libmachine: (ha-371738)     <rng model='virtio'>
	I0420 00:14:10.809006   94171 main.go:141] libmachine: (ha-371738)       <backend model='random'>/dev/random</backend>
	I0420 00:14:10.809014   94171 main.go:141] libmachine: (ha-371738)     </rng>
	I0420 00:14:10.809019   94171 main.go:141] libmachine: (ha-371738)     
	I0420 00:14:10.809024   94171 main.go:141] libmachine: (ha-371738)     
	I0420 00:14:10.809028   94171 main.go:141] libmachine: (ha-371738)   </devices>
	I0420 00:14:10.809037   94171 main.go:141] libmachine: (ha-371738) </domain>
	I0420 00:14:10.809041   94171 main.go:141] libmachine: (ha-371738) 
	I0420 00:14:10.813367   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:e3:54:1d in network default
	I0420 00:14:10.813989   94171 main.go:141] libmachine: (ha-371738) Ensuring networks are active...
	I0420 00:14:10.814016   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:10.814692   94171 main.go:141] libmachine: (ha-371738) Ensuring network default is active
	I0420 00:14:10.814953   94171 main.go:141] libmachine: (ha-371738) Ensuring network mk-ha-371738 is active
	I0420 00:14:10.815492   94171 main.go:141] libmachine: (ha-371738) Getting domain xml...
	I0420 00:14:10.816196   94171 main.go:141] libmachine: (ha-371738) Creating domain...
	I0420 00:14:11.986727   94171 main.go:141] libmachine: (ha-371738) Waiting to get IP...
	I0420 00:14:11.987631   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:11.988035   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:11.988065   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:11.988011   94195 retry.go:31] will retry after 281.596305ms: waiting for machine to come up
	I0420 00:14:12.271521   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:12.272054   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:12.272079   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:12.271994   94195 retry.go:31] will retry after 266.421398ms: waiting for machine to come up
	I0420 00:14:12.540481   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:12.540910   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:12.540933   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:12.540860   94195 retry.go:31] will retry after 468.333676ms: waiting for machine to come up
	I0420 00:14:13.010520   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:13.010954   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:13.010989   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:13.010899   94195 retry.go:31] will retry after 425.140611ms: waiting for machine to come up
	I0420 00:14:13.437327   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:13.437703   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:13.437726   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:13.437659   94195 retry.go:31] will retry after 690.263967ms: waiting for machine to come up
	I0420 00:14:14.129691   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:14.130084   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:14.130130   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:14.130049   94195 retry.go:31] will retry after 866.995514ms: waiting for machine to come up
	I0420 00:14:14.999183   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:14.999601   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:14.999634   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:14.999566   94195 retry.go:31] will retry after 1.051690522s: waiting for machine to come up
	I0420 00:14:16.052424   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:16.052882   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:16.052910   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:16.052841   94195 retry.go:31] will retry after 1.246619998s: waiting for machine to come up
	I0420 00:14:17.301213   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:17.301633   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:17.301658   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:17.301590   94195 retry.go:31] will retry after 1.149702229s: waiting for machine to come up
	I0420 00:14:18.452804   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:18.453380   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:18.453408   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:18.453301   94195 retry.go:31] will retry after 1.414395436s: waiting for machine to come up
	I0420 00:14:19.868875   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:19.869253   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:19.869282   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:19.869199   94195 retry.go:31] will retry after 1.780293534s: waiting for machine to come up
	I0420 00:14:21.650997   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:21.651558   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:21.651581   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:21.651520   94195 retry.go:31] will retry after 2.372257741s: waiting for machine to come up
	I0420 00:14:24.026971   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:24.027509   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:24.027536   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:24.027461   94195 retry.go:31] will retry after 4.453964445s: waiting for machine to come up
	I0420 00:14:28.485579   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:28.485921   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:28.485974   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:28.485879   94195 retry.go:31] will retry after 5.412436051s: waiting for machine to come up
	I0420 00:14:33.902220   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:33.902551   94171 main.go:141] libmachine: (ha-371738) Found IP for machine: 192.168.39.217
	I0420 00:14:33.902598   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has current primary IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
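The "will retry after ..." lines above are the driver polling for a DHCP lease on the new MAC with a growing, jittered delay until an address appears (about 22 seconds here). A rough equivalent of that wait loop is sketched below; lookupIP is a hypothetical stand-in for the lease query (a real implementation would parse virsh net-dhcp-leases or libvirt's API), and the delay cap is an assumption.

    // waitforip.go: poll for a machine's IP with growing, capped, jittered backoff.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a placeholder for querying DHCP leases by MAC address.
    func lookupIP(mac string) (string, error) {
    	return "", errNoLease
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		// Add jitter and grow the delay, capping it so polling stays responsive.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    	return "", fmt.Errorf("timed out after %v waiting for IP of %s", timeout, mac)
    }

    func main() {
    	ip, err := waitForIP("52:54:00:a2:22:29", 30*time.Second)
    	fmt.Println(ip, err)
    }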
	I0420 00:14:33.902620   94171 main.go:141] libmachine: (ha-371738) Reserving static IP address...
	I0420 00:14:33.902900   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find host DHCP lease matching {name: "ha-371738", mac: "52:54:00:a2:22:29", ip: "192.168.39.217"} in network mk-ha-371738
	I0420 00:14:33.973862   94171 main.go:141] libmachine: (ha-371738) DBG | Getting to WaitForSSH function...
	I0420 00:14:33.973890   94171 main.go:141] libmachine: (ha-371738) Reserved static IP address: 192.168.39.217
	I0420 00:14:33.973909   94171 main.go:141] libmachine: (ha-371738) Waiting for SSH to be available...
	I0420 00:14:33.976405   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:33.976724   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:33.976750   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:33.976865   94171 main.go:141] libmachine: (ha-371738) DBG | Using SSH client type: external
	I0420 00:14:33.976888   94171 main.go:141] libmachine: (ha-371738) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa (-rw-------)
	I0420 00:14:33.976923   94171 main.go:141] libmachine: (ha-371738) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 00:14:33.976950   94171 main.go:141] libmachine: (ha-371738) DBG | About to run SSH command:
	I0420 00:14:33.976999   94171 main.go:141] libmachine: (ha-371738) DBG | exit 0
	I0420 00:14:34.105632   94171 main.go:141] libmachine: (ha-371738) DBG | SSH cmd err, output: <nil>: 
	I0420 00:14:34.105941   94171 main.go:141] libmachine: (ha-371738) KVM machine creation complete!
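Creation is only declared complete above once an external ssh invocation of "exit 0" succeeds against the new IP, using the hardened options shown in the log (no known-hosts file, identity-only key auth, 10s connect timeout). The sketch below re-creates that readiness probe; the host, key path, and retry cadence are placeholders, not values taken from minikube's code.

    // waitforssh.go: treat a machine as "up" once `ssh ... exit 0` returns success.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func sshReady(host, keyPath string) bool {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@" + host,
    		"exit 0", // cheapest possible command: success means sshd is answering
    	}
    	return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
    	host, key := "192.168.39.217", "/path/to/machines/ha-371738/id_rsa"
    	for i := 0; i < 30; i++ {
    		if sshReady(host, key) {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }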
	I0420 00:14:34.106307   94171 main.go:141] libmachine: (ha-371738) Calling .GetConfigRaw
	I0420 00:14:34.106895   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:34.107127   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:34.107347   94171 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 00:14:34.107364   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:14:34.108798   94171 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 00:14:34.108812   94171 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 00:14:34.108818   94171 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 00:14:34.108824   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.111034   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.111469   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.111496   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.111612   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.111777   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.111966   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.112133   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.112316   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.112578   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.112594   94171 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 00:14:34.224894   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:14:34.224930   94171 main.go:141] libmachine: Detecting the provisioner...
	I0420 00:14:34.224941   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.227796   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.228170   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.228224   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.228436   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.228645   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.228805   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.228940   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.229109   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.229290   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.229304   94171 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 00:14:34.342472   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 00:14:34.342556   94171 main.go:141] libmachine: found compatible host: buildroot
	I0420 00:14:34.342570   94171 main.go:141] libmachine: Provisioning with buildroot...
	I0420 00:14:34.342585   94171 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:14:34.342856   94171 buildroot.go:166] provisioning hostname "ha-371738"
	I0420 00:14:34.342889   94171 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:14:34.343087   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.345346   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.345652   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.345680   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.345763   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.345930   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.346080   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.346222   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.346409   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.346575   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.346587   94171 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-371738 && echo "ha-371738" | sudo tee /etc/hostname
	I0420 00:14:34.473096   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738
	
	I0420 00:14:34.473139   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.476156   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.476606   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.476637   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.476805   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.476969   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.477081   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.477208   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.477399   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.477589   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.477616   94171 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-371738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-371738/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-371738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:14:34.600268   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:14:34.600306   94171 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:14:34.600333   94171 buildroot.go:174] setting up certificates
	I0420 00:14:34.600375   94171 provision.go:84] configureAuth start
	I0420 00:14:34.600395   94171 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:14:34.600736   94171 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:14:34.603374   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.603748   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.603776   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.603967   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.606595   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.606977   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.607009   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.607144   94171 provision.go:143] copyHostCerts
	I0420 00:14:34.607180   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:14:34.607213   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:14:34.607223   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:14:34.607287   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:14:34.607365   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:14:34.607384   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:14:34.607388   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:14:34.607411   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:14:34.607452   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:14:34.607470   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:14:34.607477   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:14:34.607496   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:14:34.607542   94171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.ha-371738 san=[127.0.0.1 192.168.39.217 ha-371738 localhost minikube]
	I0420 00:14:34.685937   94171 provision.go:177] copyRemoteCerts
	I0420 00:14:34.685996   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:14:34.686023   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.688755   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.689087   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.689118   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.689290   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.689506   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.689669   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.689817   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:14:34.776784   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:14:34.776857   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0420 00:14:34.806052   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:14:34.806117   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 00:14:34.833608   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:14:34.833691   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:14:34.860865   94171 provision.go:87] duration metric: took 260.468299ms to configureAuth
	I0420 00:14:34.860896   94171 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:14:34.861106   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:14:34.861271   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.863727   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.864022   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.864074   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.864231   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.864433   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.864644   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.864784   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.864998   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.865242   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.865267   94171 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:14:35.166112   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:14:35.166143   94171 main.go:141] libmachine: Checking connection to Docker...
	I0420 00:14:35.166169   94171 main.go:141] libmachine: (ha-371738) Calling .GetURL
	I0420 00:14:35.167408   94171 main.go:141] libmachine: (ha-371738) DBG | Using libvirt version 6000000
	I0420 00:14:35.169613   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.169882   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.169904   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.170118   94171 main.go:141] libmachine: Docker is up and running!
	I0420 00:14:35.170132   94171 main.go:141] libmachine: Reticulating splines...
	I0420 00:14:35.170142   94171 client.go:171] duration metric: took 24.838966937s to LocalClient.Create
	I0420 00:14:35.170170   94171 start.go:167] duration metric: took 24.839039485s to libmachine.API.Create "ha-371738"
	I0420 00:14:35.170182   94171 start.go:293] postStartSetup for "ha-371738" (driver="kvm2")
	I0420 00:14:35.170197   94171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:14:35.170221   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.170482   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:14:35.170514   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:35.172733   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.173061   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.173092   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.173227   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:35.173443   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.173600   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:35.173782   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:14:35.260241   94171 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:14:35.265205   94171 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:14:35.265232   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:14:35.265305   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:14:35.265414   94171 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:14:35.265427   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:14:35.265548   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:14:35.276008   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:14:35.306759   94171 start.go:296] duration metric: took 136.561279ms for postStartSetup
	I0420 00:14:35.306816   94171 main.go:141] libmachine: (ha-371738) Calling .GetConfigRaw
	I0420 00:14:35.307395   94171 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:14:35.310155   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.310544   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.310574   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.310807   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:14:35.310998   94171 start.go:128] duration metric: took 24.999017816s to createHost
	I0420 00:14:35.311024   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:35.313335   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.313642   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.313666   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.313804   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:35.313980   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.314127   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.314270   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:35.314414   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:35.314587   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:35.314602   94171 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:14:35.426203   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713572075.396742702
	
	I0420 00:14:35.426227   94171 fix.go:216] guest clock: 1713572075.396742702
	I0420 00:14:35.426234   94171 fix.go:229] Guest: 2024-04-20 00:14:35.396742702 +0000 UTC Remote: 2024-04-20 00:14:35.311011787 +0000 UTC m=+25.122723442 (delta=85.730915ms)
	I0420 00:14:35.426268   94171 fix.go:200] guest clock delta is within tolerance: 85.730915ms
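The fix.go lines above read the guest clock with `date +%s.%N` over SSH and compare it to the host clock, proceeding only when the delta is within tolerance (about 86ms here). A small sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not minikube's actual threshold.

    // clockdelta.go: parse a guest's `date +%s.%N` output and compare to local time.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Pad or truncate the fractional part to exactly nine digits (nanoseconds).
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1713572075.396742702") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed tolerance, for illustration only
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
    }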
	I0420 00:14:35.426274   94171 start.go:83] releasing machines lock for "ha-371738", held for 25.114360814s
	I0420 00:14:35.426296   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.426621   94171 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:14:35.429284   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.429644   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.429670   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.429814   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.430474   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.430681   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.430745   94171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:14:35.430803   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:35.430890   94171 ssh_runner.go:195] Run: cat /version.json
	I0420 00:14:35.430907   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:35.433289   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.433570   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.433606   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.433748   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:35.433755   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.433928   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.434093   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:35.434153   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.434180   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.434258   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:14:35.434369   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:35.434535   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.434711   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:35.434899   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:14:35.539476   94171 ssh_runner.go:195] Run: systemctl --version
	I0420 00:14:35.546100   94171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:14:35.707988   94171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 00:14:35.715140   94171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:14:35.715242   94171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:14:35.736498   94171 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 00:14:35.736530   94171 start.go:494] detecting cgroup driver to use...
	I0420 00:14:35.736603   94171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:14:35.755787   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:14:35.772011   94171 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:14:35.772081   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:14:35.788311   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:14:35.803910   94171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:14:35.928875   94171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:14:36.077151   94171 docker.go:233] disabling docker service ...
	I0420 00:14:36.077228   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:14:36.093913   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:14:36.107308   94171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:14:36.245768   94171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:14:36.356723   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:14:36.371396   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:14:36.391651   94171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:14:36.391722   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.402637   94171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:14:36.402701   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.413751   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.424657   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.435450   94171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:14:36.446564   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.457469   94171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.476084   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.486545   94171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:14:36.495981   94171 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 00:14:36.496036   94171 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 00:14:36.510160   94171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:14:36.519893   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:14:36.635739   94171 ssh_runner.go:195] Run: sudo systemctl restart crio
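The run of sed commands above patches /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, forces conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before daemon-reload and a crio restart. The same edits expressed as an in-memory rewrite are sketched below; this operates on the file contents as a string and is not the code minikube actually runs.

    // criocfg.go: apply the 02-crio.conf edits from the log as string rewrites.
    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func patchCrioConf(conf string) string {
    	// Pin the pause image and the cgroup driver.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then pin it to "pod" after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = strings.Replace(conf,
    		`cgroup_manager = "cgroupfs"`,
    		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
    	// Allow unprivileged low ports inside pods via default_sysctls.
    	if !strings.Contains(conf, "default_sysctls") {
    		conf += "\ndefault_sysctls = [\n]\n"
    	}
    	conf = strings.Replace(conf, "default_sysctls = [",
    		"default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",", 1)
    	return conf
    }

    func main() {
    	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
    	fmt.Println(patchCrioConf(in))
    }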
	I0420 00:14:36.789171   94171 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:14:36.789252   94171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:14:36.796872   94171 start.go:562] Will wait 60s for crictl version
	I0420 00:14:36.796925   94171 ssh_runner.go:195] Run: which crictl
	I0420 00:14:36.801585   94171 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:14:36.837894   94171 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 00:14:36.837990   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:14:36.869135   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:14:36.904998   94171 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:14:36.906493   94171 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:14:36.909156   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:36.909578   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:36.909610   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:36.909813   94171 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:14:36.914426   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:14:36.928301   94171 kubeadm.go:877] updating cluster {Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:1
92.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 00:14:36.928403   94171 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:14:36.928447   94171 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:14:36.967553   94171 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 00:14:36.967691   94171 ssh_runner.go:195] Run: which lz4
	I0420 00:14:36.972305   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0420 00:14:36.972384   94171 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 00:14:36.976978   94171 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 00:14:36.977009   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 00:14:38.655166   94171 crio.go:462] duration metric: took 1.682799034s to copy over tarball
	I0420 00:14:38.655238   94171 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 00:14:41.019902   94171 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.36463309s)
	I0420 00:14:41.019937   94171 crio.go:469] duration metric: took 2.364739736s to extract the tarball
	I0420 00:14:41.019945   94171 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 00:14:41.059584   94171 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:14:41.111191   94171 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:14:41.111222   94171 cache_images.go:84] Images are preloaded, skipping loading
	I0420 00:14:41.111232   94171 kubeadm.go:928] updating node { 192.168.39.217 8443 v1.30.0 crio true true} ...
	I0420 00:14:41.111369   94171 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-371738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 00:14:41.111435   94171 ssh_runner.go:195] Run: crio config
	I0420 00:14:41.165524   94171 cni.go:84] Creating CNI manager for ""
	I0420 00:14:41.165550   94171 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0420 00:14:41.165562   94171 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 00:14:41.165583   94171 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-371738 NodeName:ha-371738 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 00:14:41.165742   94171 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-371738"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
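The kubeadm config above is rendered from the cluster parameters listed in the kubeadm.go:181 line (advertise address, API server port, pod and service CIDRs, cgroup driver, CRI socket) and is later written to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down sketch of rendering such a config with text/template follows; it covers only a few fields and is not minikube's actual template.

    // kubeadmcfg.go: render a minimal kubeadm InitConfiguration from parameters.
    package main

    import (
    	"os"
    	"text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    type params struct {
    	NodeIP    string
    	Port      int
    	NodeName  string
    	CRISocket string
    }

    func main() {
    	p := params{
    		NodeIP:    "192.168.39.217",
    		Port:      8443,
    		NodeName:  "ha-371738",
    		CRISocket: "unix:///var/run/crio/crio.sock",
    	}
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }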
	
	I0420 00:14:41.165767   94171 kube-vip.go:111] generating kube-vip config ...
	I0420 00:14:41.165808   94171 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0420 00:14:41.183420   94171 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 00:14:41.183562   94171 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
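The manifest above is the kube-vip static pod: it advertises the HA VIP 192.168.39.254 on eth0, takes leader election through the plndr-cp-lock lease, enables control-plane load balancing on port 8443, and mounts super-admin.conf as its kubeconfig. As a minimal sketch of how such a manifest can be rendered from parameters, not minikube's actual template, the following Go program fills a deliberately trimmed-down template with the VIP, port, and image values seen in the log.

package main

import (
	"os"
	"text/template"
)

// Hypothetical, trimmed-down template; the real manifest above carries many more env vars.
const kubeVIPTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

type vipConfig struct {
	Image string
	VIP   string
	Port  string
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
	// Values copied from the log: VIP 192.168.39.254, API server port 8443.
	cfg := vipConfig{Image: "ghcr.io/kube-vip/kube-vip:v0.7.1", VIP: "192.168.39.254", Port: "8443"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}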
	I0420 00:14:41.183644   94171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:14:41.194986   94171 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 00:14:41.195057   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0420 00:14:41.206454   94171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0420 00:14:41.225330   94171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:14:41.244259   94171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0420 00:14:41.263430   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0420 00:14:41.283045   94171 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0420 00:14:41.287585   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:14:41.302261   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:14:41.426221   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:14:41.451386   94171 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738 for IP: 192.168.39.217
	I0420 00:14:41.451413   94171 certs.go:194] generating shared ca certs ...
	I0420 00:14:41.451436   94171 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.451588   94171 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:14:41.451630   94171 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:14:41.451648   94171 certs.go:256] generating profile certs ...
	I0420 00:14:41.451696   94171 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key
	I0420 00:14:41.451709   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt with IP's: []
	I0420 00:14:41.558257   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt ...
	I0420 00:14:41.558289   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt: {Name:mk37036e41ddddbb176e3a2220121f170aa3b61d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.558481   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key ...
	I0420 00:14:41.558496   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key: {Name:mk4a286ff198053f6c6692e73c8407a1abbd3471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.558603   94171 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.1a7612e1
	I0420 00:14:41.558621   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.1a7612e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.254]
	I0420 00:14:41.683925   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.1a7612e1 ...
	I0420 00:14:41.683954   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.1a7612e1: {Name:mkffd2bb9c98164ef687ab11af6ed48e5403c4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.684149   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.1a7612e1 ...
	I0420 00:14:41.684168   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.1a7612e1: {Name:mkf33c0579e0c29722688b1a37c41c0ea7e506dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.684263   94171 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.1a7612e1 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt
	I0420 00:14:41.684337   94171 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.1a7612e1 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key
	I0420 00:14:41.684396   94171 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key
	I0420 00:14:41.684413   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt with IP's: []
	I0420 00:14:41.727747   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt ...
	I0420 00:14:41.727776   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt: {Name:mk29033f64798e7acd5af0c56f6c48c6e244f1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.727972   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key ...
	I0420 00:14:41.727991   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key: {Name:mk42c86039cd6e2255e65e9ba5d6ceb201c5e13e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
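The "generating signed profile cert ... with IP's: [...]" lines above create certificates whose IP SANs cover the service VIP, loopback, the node IP, and the HA VIP. Purely as an illustration of what that SAN list means, the sketch below builds a certificate with the same IP SANs using crypto/x509; it is self-signed for brevity, whereas the real profile certs are signed by the minikube CA.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs copied from the apiserver cert generation line in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.217"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}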
	I0420 00:14:41.728098   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:14:41.728119   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:14:41.728129   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:14:41.728150   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:14:41.728163   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:14:41.728176   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:14:41.728188   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:14:41.728197   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:14:41.728243   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:14:41.728280   94171 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:14:41.728296   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:14:41.728322   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:14:41.728347   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:14:41.728368   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:14:41.728404   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:14:41.728428   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:14:41.728441   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:14:41.728453   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:14:41.729061   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:14:41.765127   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:14:41.794438   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:14:41.822431   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:14:41.849785   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 00:14:41.877705   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:14:41.906946   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:14:41.936879   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 00:14:41.966624   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:14:41.997130   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:14:42.026360   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:14:42.056158   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 00:14:42.075714   94171 ssh_runner.go:195] Run: openssl version
	I0420 00:14:42.082311   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:14:42.094512   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:14:42.099778   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:14:42.099854   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:14:42.106407   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 00:14:42.118386   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:14:42.130132   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:14:42.134876   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:14:42.134933   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:14:42.141382   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 00:14:42.153942   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:14:42.166764   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:14:42.172358   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:14:42.172412   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:14:42.178712   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
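The openssl/ln sequence above follows the OpenSSL hashed-directory convention: each CA certificate in /etc/ssl/certs is reachable through a symlink named after its subject hash with a ".0" suffix (b5213941.0 for minikubeCA.pem here), so TLS clients that scan the hashed directory can find it. The Go sketch below reproduces that one step; it assumes the openssl binary is present and that the process may write to /etc/ssl/certs, with the certificate path taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	// Compute the OpenSSL subject hash of the certificate.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log
	// Link the cert into the hashed directory under "<hash>.0".
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", cert, "->", link)
}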
	I0420 00:14:42.190652   94171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:14:42.195594   94171 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 00:14:42.195653   94171 kubeadm.go:391] StartCluster: {Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:14:42.195745   94171 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:14:42.195787   94171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:14:42.248782   94171 cri.go:89] found id: ""
	I0420 00:14:42.248879   94171 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 00:14:42.261013   94171 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 00:14:42.276919   94171 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 00:14:42.291045   94171 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 00:14:42.291074   94171 kubeadm.go:156] found existing configuration files:
	
	I0420 00:14:42.291123   94171 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 00:14:42.308551   94171 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 00:14:42.308611   94171 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 00:14:42.322614   94171 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 00:14:42.334712   94171 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 00:14:42.334796   94171 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 00:14:42.347360   94171 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 00:14:42.359153   94171 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 00:14:42.359223   94171 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 00:14:42.370898   94171 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 00:14:42.382391   94171 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 00:14:42.382455   94171 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 00:14:42.394376   94171 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 00:14:42.644913   94171 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 00:14:56.659091   94171 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 00:14:56.659180   94171 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 00:14:56.659277   94171 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 00:14:56.659379   94171 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 00:14:56.659489   94171 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0420 00:14:56.659576   94171 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 00:14:56.661227   94171 out.go:204]   - Generating certificates and keys ...
	I0420 00:14:56.661334   94171 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 00:14:56.661412   94171 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 00:14:56.661475   94171 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 00:14:56.661524   94171 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 00:14:56.661574   94171 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 00:14:56.661644   94171 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 00:14:56.661724   94171 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 00:14:56.661852   94171 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-371738 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0420 00:14:56.661941   94171 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 00:14:56.662070   94171 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-371738 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0420 00:14:56.662199   94171 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 00:14:56.662299   94171 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 00:14:56.662361   94171 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 00:14:56.662436   94171 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 00:14:56.662501   94171 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 00:14:56.662579   94171 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 00:14:56.662654   94171 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 00:14:56.662717   94171 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 00:14:56.662798   94171 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 00:14:56.662902   94171 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 00:14:56.662986   94171 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 00:14:56.664457   94171 out.go:204]   - Booting up control plane ...
	I0420 00:14:56.664557   94171 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 00:14:56.664643   94171 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 00:14:56.664723   94171 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 00:14:56.664836   94171 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 00:14:56.664955   94171 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 00:14:56.665016   94171 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 00:14:56.665166   94171 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 00:14:56.665230   94171 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 00:14:56.665282   94171 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.15633ms
	I0420 00:14:56.665382   94171 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 00:14:56.665477   94171 kubeadm.go:309] [api-check] The API server is healthy after 9.040398858s
	I0420 00:14:56.665596   94171 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 00:14:56.665697   94171 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 00:14:56.665784   94171 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 00:14:56.665933   94171 kubeadm.go:309] [mark-control-plane] Marking the node ha-371738 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 00:14:56.666019   94171 kubeadm.go:309] [bootstrap-token] Using token: 7d4v3p.d8unl1jztmptssyo
	I0420 00:14:56.667261   94171 out.go:204]   - Configuring RBAC rules ...
	I0420 00:14:56.667354   94171 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 00:14:56.667463   94171 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 00:14:56.667585   94171 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 00:14:56.667698   94171 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 00:14:56.667833   94171 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 00:14:56.667952   94171 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 00:14:56.668086   94171 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 00:14:56.668144   94171 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 00:14:56.668212   94171 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 00:14:56.668222   94171 kubeadm.go:309] 
	I0420 00:14:56.668306   94171 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 00:14:56.668317   94171 kubeadm.go:309] 
	I0420 00:14:56.668378   94171 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 00:14:56.668384   94171 kubeadm.go:309] 
	I0420 00:14:56.668429   94171 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 00:14:56.668478   94171 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 00:14:56.668524   94171 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 00:14:56.668530   94171 kubeadm.go:309] 
	I0420 00:14:56.668611   94171 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 00:14:56.668621   94171 kubeadm.go:309] 
	I0420 00:14:56.668685   94171 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 00:14:56.668695   94171 kubeadm.go:309] 
	I0420 00:14:56.668765   94171 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 00:14:56.668865   94171 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 00:14:56.668972   94171 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 00:14:56.668984   94171 kubeadm.go:309] 
	I0420 00:14:56.669100   94171 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 00:14:56.669172   94171 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 00:14:56.669184   94171 kubeadm.go:309] 
	I0420 00:14:56.669295   94171 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7d4v3p.d8unl1jztmptssyo \
	I0420 00:14:56.669427   94171 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 00:14:56.669468   94171 kubeadm.go:309] 	--control-plane 
	I0420 00:14:56.669483   94171 kubeadm.go:309] 
	I0420 00:14:56.669591   94171 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 00:14:56.669600   94171 kubeadm.go:309] 
	I0420 00:14:56.669713   94171 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7d4v3p.d8unl1jztmptssyo \
	I0420 00:14:56.669817   94171 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
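The --discovery-token-ca-cert-hash value printed in the join commands above is, per kubeadm's documented format, "sha256:" followed by the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. The sketch below recomputes it from the CA file used by this run; the on-node path /var/lib/minikube/certs/ca.crt is taken from the certificate copy steps earlier in the log.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA certificate path from the log above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does for the discovery hash.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}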
	I0420 00:14:56.669838   94171 cni.go:84] Creating CNI manager for ""
	I0420 00:14:56.669846   94171 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0420 00:14:56.671350   94171 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0420 00:14:56.672615   94171 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0420 00:14:56.678620   94171 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0420 00:14:56.678638   94171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0420 00:14:56.697166   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0420 00:14:57.036496   94171 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 00:14:57.036606   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:57.036621   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-371738 minikube.k8s.io/updated_at=2024_04_20T00_14_57_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-371738 minikube.k8s.io/primary=true
	I0420 00:14:57.055480   94171 ops.go:34] apiserver oom_adj: -16
	I0420 00:14:57.278503   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:57.779520   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:58.279319   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:58.778705   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:59.278539   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:59.779557   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:00.279323   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:00.778642   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:01.279519   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:01.778739   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:02.278673   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:02.779134   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:03.279576   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:03.778637   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:04.279543   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:04.779335   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:05.279229   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:05.778563   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:06.279216   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:06.779166   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:06.891209   94171 kubeadm.go:1107] duration metric: took 9.854679932s to wait for elevateKubeSystemPrivileges
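Between 00:14:57 and 00:15:06 the run polls `kubectl get sa default` roughly every 500ms until the default service account exists, which is what the elevateKubeSystemPrivileges duration above measures. A hedged Go sketch of an equivalent poll loop follows; the kubectl and kubeconfig paths come from the log, while the 30-second budget is an assumption for the example.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.0/kubectl"
	deadline := time.Now().Add(30 * time.Second)
	for {
		// Retry until the default service account is visible in the cluster.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is available")
			return
		}
		if time.Now().After(deadline) {
			panic("timed out waiting for the default service account")
		}
		time.Sleep(500 * time.Millisecond)
	}
}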
	W0420 00:15:06.891255   94171 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 00:15:06.891267   94171 kubeadm.go:393] duration metric: took 24.695616998s to StartCluster
	I0420 00:15:06.891290   94171 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:06.891378   94171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:15:06.892094   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:06.892339   94171 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:15:06.892370   94171 start.go:240] waiting for startup goroutines ...
	I0420 00:15:06.892352   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0420 00:15:06.892364   94171 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 00:15:06.892428   94171 addons.go:69] Setting storage-provisioner=true in profile "ha-371738"
	I0420 00:15:06.892465   94171 addons.go:69] Setting default-storageclass=true in profile "ha-371738"
	I0420 00:15:06.892478   94171 addons.go:234] Setting addon storage-provisioner=true in "ha-371738"
	I0420 00:15:06.892516   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:15:06.892523   94171 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-371738"
	I0420 00:15:06.892554   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:15:06.892891   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.892936   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.893002   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.893041   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.908100   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0420 00:15:06.908133   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0420 00:15:06.908589   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.908639   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.909103   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.909124   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.909104   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.909177   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.909501   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.909534   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.909690   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:15:06.910074   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.910107   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.911961   94171 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:15:06.912378   94171 kapi.go:59] client config for ha-371738: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt", KeyFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key", CAFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0420 00:15:06.913054   94171 cert_rotation.go:137] Starting client certificate rotation controller
	I0420 00:15:06.913382   94171 addons.go:234] Setting addon default-storageclass=true in "ha-371738"
	I0420 00:15:06.913430   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:15:06.913807   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.913844   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.925232   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36041
	I0420 00:15:06.925655   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.926207   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.926237   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.926561   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.926788   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:15:06.928162   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0420 00:15:06.928490   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:15:06.928631   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.930698   94171 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 00:15:06.929066   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.930741   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.931033   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.932320   94171 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:15:06.932337   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 00:15:06.932355   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:15:06.932918   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.932958   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.935247   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:06.935715   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:15:06.935744   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:06.935865   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:15:06.936139   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:15:06.936317   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:15:06.936488   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:15:06.948490   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33721
	I0420 00:15:06.948957   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.949472   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.949498   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.949825   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.950017   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:15:06.951437   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:15:06.951723   94171 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 00:15:06.951743   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 00:15:06.951761   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:15:06.955093   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:06.955543   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:15:06.955571   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:06.955726   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:15:06.955923   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:15:06.956054   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:15:06.956224   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:15:07.028723   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0420 00:15:07.146817   94171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:15:07.180824   94171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 00:15:07.523477   94171 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
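The command at 00:15:07.028 pipes the coredns ConfigMap through sed to insert a hosts block mapping host.minikube.internal to 192.168.39.1 ahead of the forward directive, then replaces the ConfigMap, producing the "host record injected" line above. As an alternative illustration of the same rewrite, not the command minikube ran, the sketch below does it with client-go; the kubeconfig path is taken from the log and client-go availability is assumed.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: run where this kubeconfig is readable (the node path from the log).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	corefile := cm.Data["Corefile"]
	hostsBlock := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
	// Insert the hosts block before the forward directive, mirroring the sed expression in the log
	// (indentation matches the default Corefile layout).
	corefile = strings.Replace(corefile, "        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf", 1)
	cm.Data["Corefile"] = corefile
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("host record injected into CoreDNS ConfigMap")
}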
	I0420 00:15:07.786608   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.786638   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.786658   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.786645   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.787116   94171 main.go:141] libmachine: (ha-371738) DBG | Closing plugin on server side
	I0420 00:15:07.787125   94171 main.go:141] libmachine: (ha-371738) DBG | Closing plugin on server side
	I0420 00:15:07.787122   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.787144   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.787160   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.787175   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.787147   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.787198   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.787279   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.787318   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.787420   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.787436   94171 main.go:141] libmachine: (ha-371738) DBG | Closing plugin on server side
	I0420 00:15:07.787447   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.787610   94171 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0420 00:15:07.787630   94171 round_trippers.go:469] Request Headers:
	I0420 00:15:07.787641   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:15:07.787656   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:15:07.787683   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.787696   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.805769   94171 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0420 00:15:07.806728   94171 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0420 00:15:07.806749   94171 round_trippers.go:469] Request Headers:
	I0420 00:15:07.806761   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:15:07.806769   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:15:07.806780   94171 round_trippers.go:473]     Content-Type: application/json
	I0420 00:15:07.813984   94171 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 00:15:07.814146   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.814162   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.814465   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.814486   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.814488   94171 main.go:141] libmachine: (ha-371738) DBG | Closing plugin on server side
	I0420 00:15:07.816664   94171 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0420 00:15:07.817831   94171 addons.go:505] duration metric: took 925.463435ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0420 00:15:07.817867   94171 start.go:245] waiting for cluster config update ...
	I0420 00:15:07.817878   94171 start.go:254] writing updated cluster config ...
	I0420 00:15:07.819351   94171 out.go:177] 
	I0420 00:15:07.820931   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:15:07.820997   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:15:07.822681   94171 out.go:177] * Starting "ha-371738-m02" control-plane node in "ha-371738" cluster
	I0420 00:15:07.823905   94171 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:15:07.823927   94171 cache.go:56] Caching tarball of preloaded images
	I0420 00:15:07.824002   94171 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:15:07.824010   94171 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:15:07.824103   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:15:07.824306   94171 start.go:360] acquireMachinesLock for ha-371738-m02: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:15:07.824372   94171 start.go:364] duration metric: took 38.785µs to acquireMachinesLock for "ha-371738-m02"
	I0420 00:15:07.824396   94171 start.go:93] Provisioning new machine with config: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:15:07.824561   94171 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0420 00:15:07.826178   94171 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0420 00:15:07.826258   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:07.826281   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:07.840880   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0420 00:15:07.841345   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:07.841881   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:07.841906   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:07.842229   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:07.842434   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetMachineName
	I0420 00:15:07.842585   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:07.842757   94171 start.go:159] libmachine.API.Create for "ha-371738" (driver="kvm2")
	I0420 00:15:07.842781   94171 client.go:168] LocalClient.Create starting
	I0420 00:15:07.842815   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 00:15:07.842858   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:15:07.842879   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:15:07.842964   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 00:15:07.842992   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:15:07.843013   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:15:07.843038   94171 main.go:141] libmachine: Running pre-create checks...
	I0420 00:15:07.843048   94171 main.go:141] libmachine: (ha-371738-m02) Calling .PreCreateCheck
	I0420 00:15:07.843203   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetConfigRaw
	I0420 00:15:07.843717   94171 main.go:141] libmachine: Creating machine...
	I0420 00:15:07.843731   94171 main.go:141] libmachine: (ha-371738-m02) Calling .Create
	I0420 00:15:07.843870   94171 main.go:141] libmachine: (ha-371738-m02) Creating KVM machine...
	I0420 00:15:07.845062   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found existing default KVM network
	I0420 00:15:07.845209   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found existing private KVM network mk-ha-371738
	I0420 00:15:07.845364   94171 main.go:141] libmachine: (ha-371738-m02) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02 ...
	I0420 00:15:07.845386   94171 main.go:141] libmachine: (ha-371738-m02) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 00:15:07.845434   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:07.845334   94574 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:15:07.845521   94171 main.go:141] libmachine: (ha-371738-m02) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 00:15:08.075190   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:08.075057   94574 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa...
	I0420 00:15:08.268872   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:08.268746   94574 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/ha-371738-m02.rawdisk...
	I0420 00:15:08.268918   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Writing magic tar header
	I0420 00:15:08.268933   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Writing SSH key tar header
	I0420 00:15:08.268954   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:08.268860   94574 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02 ...
	I0420 00:15:08.268971   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02
	I0420 00:15:08.268996   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 00:15:08.269013   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02 (perms=drwx------)
	I0420 00:15:08.269035   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 00:15:08.269050   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:15:08.269062   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 00:15:08.269073   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 00:15:08.269079   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 00:15:08.269086   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 00:15:08.269091   94171 main.go:141] libmachine: (ha-371738-m02) Creating domain...
	I0420 00:15:08.269103   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 00:15:08.269109   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 00:15:08.269141   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins
	I0420 00:15:08.269163   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home
	I0420 00:15:08.269208   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Skipping /home - not owner
	I0420 00:15:08.269996   94171 main.go:141] libmachine: (ha-371738-m02) define libvirt domain using xml: 
	I0420 00:15:08.270012   94171 main.go:141] libmachine: (ha-371738-m02) <domain type='kvm'>
	I0420 00:15:08.270018   94171 main.go:141] libmachine: (ha-371738-m02)   <name>ha-371738-m02</name>
	I0420 00:15:08.270023   94171 main.go:141] libmachine: (ha-371738-m02)   <memory unit='MiB'>2200</memory>
	I0420 00:15:08.270028   94171 main.go:141] libmachine: (ha-371738-m02)   <vcpu>2</vcpu>
	I0420 00:15:08.270033   94171 main.go:141] libmachine: (ha-371738-m02)   <features>
	I0420 00:15:08.270041   94171 main.go:141] libmachine: (ha-371738-m02)     <acpi/>
	I0420 00:15:08.270047   94171 main.go:141] libmachine: (ha-371738-m02)     <apic/>
	I0420 00:15:08.270060   94171 main.go:141] libmachine: (ha-371738-m02)     <pae/>
	I0420 00:15:08.270070   94171 main.go:141] libmachine: (ha-371738-m02)     
	I0420 00:15:08.270081   94171 main.go:141] libmachine: (ha-371738-m02)   </features>
	I0420 00:15:08.270087   94171 main.go:141] libmachine: (ha-371738-m02)   <cpu mode='host-passthrough'>
	I0420 00:15:08.270094   94171 main.go:141] libmachine: (ha-371738-m02)   
	I0420 00:15:08.270102   94171 main.go:141] libmachine: (ha-371738-m02)   </cpu>
	I0420 00:15:08.270110   94171 main.go:141] libmachine: (ha-371738-m02)   <os>
	I0420 00:15:08.270121   94171 main.go:141] libmachine: (ha-371738-m02)     <type>hvm</type>
	I0420 00:15:08.270132   94171 main.go:141] libmachine: (ha-371738-m02)     <boot dev='cdrom'/>
	I0420 00:15:08.270142   94171 main.go:141] libmachine: (ha-371738-m02)     <boot dev='hd'/>
	I0420 00:15:08.270154   94171 main.go:141] libmachine: (ha-371738-m02)     <bootmenu enable='no'/>
	I0420 00:15:08.270161   94171 main.go:141] libmachine: (ha-371738-m02)   </os>
	I0420 00:15:08.270172   94171 main.go:141] libmachine: (ha-371738-m02)   <devices>
	I0420 00:15:08.270187   94171 main.go:141] libmachine: (ha-371738-m02)     <disk type='file' device='cdrom'>
	I0420 00:15:08.270203   94171 main.go:141] libmachine: (ha-371738-m02)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/boot2docker.iso'/>
	I0420 00:15:08.270213   94171 main.go:141] libmachine: (ha-371738-m02)       <target dev='hdc' bus='scsi'/>
	I0420 00:15:08.270240   94171 main.go:141] libmachine: (ha-371738-m02)       <readonly/>
	I0420 00:15:08.270250   94171 main.go:141] libmachine: (ha-371738-m02)     </disk>
	I0420 00:15:08.270289   94171 main.go:141] libmachine: (ha-371738-m02)     <disk type='file' device='disk'>
	I0420 00:15:08.270321   94171 main.go:141] libmachine: (ha-371738-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 00:15:08.270352   94171 main.go:141] libmachine: (ha-371738-m02)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/ha-371738-m02.rawdisk'/>
	I0420 00:15:08.270447   94171 main.go:141] libmachine: (ha-371738-m02)       <target dev='hda' bus='virtio'/>
	I0420 00:15:08.270468   94171 main.go:141] libmachine: (ha-371738-m02)     </disk>
	I0420 00:15:08.270474   94171 main.go:141] libmachine: (ha-371738-m02)     <interface type='network'>
	I0420 00:15:08.270484   94171 main.go:141] libmachine: (ha-371738-m02)       <source network='mk-ha-371738'/>
	I0420 00:15:08.270492   94171 main.go:141] libmachine: (ha-371738-m02)       <model type='virtio'/>
	I0420 00:15:08.270498   94171 main.go:141] libmachine: (ha-371738-m02)     </interface>
	I0420 00:15:08.270505   94171 main.go:141] libmachine: (ha-371738-m02)     <interface type='network'>
	I0420 00:15:08.270523   94171 main.go:141] libmachine: (ha-371738-m02)       <source network='default'/>
	I0420 00:15:08.270538   94171 main.go:141] libmachine: (ha-371738-m02)       <model type='virtio'/>
	I0420 00:15:08.270551   94171 main.go:141] libmachine: (ha-371738-m02)     </interface>
	I0420 00:15:08.270566   94171 main.go:141] libmachine: (ha-371738-m02)     <serial type='pty'>
	I0420 00:15:08.270578   94171 main.go:141] libmachine: (ha-371738-m02)       <target port='0'/>
	I0420 00:15:08.270588   94171 main.go:141] libmachine: (ha-371738-m02)     </serial>
	I0420 00:15:08.270600   94171 main.go:141] libmachine: (ha-371738-m02)     <console type='pty'>
	I0420 00:15:08.270612   94171 main.go:141] libmachine: (ha-371738-m02)       <target type='serial' port='0'/>
	I0420 00:15:08.270623   94171 main.go:141] libmachine: (ha-371738-m02)     </console>
	I0420 00:15:08.270640   94171 main.go:141] libmachine: (ha-371738-m02)     <rng model='virtio'>
	I0420 00:15:08.270654   94171 main.go:141] libmachine: (ha-371738-m02)       <backend model='random'>/dev/random</backend>
	I0420 00:15:08.270663   94171 main.go:141] libmachine: (ha-371738-m02)     </rng>
	I0420 00:15:08.270671   94171 main.go:141] libmachine: (ha-371738-m02)     
	I0420 00:15:08.270680   94171 main.go:141] libmachine: (ha-371738-m02)     
	I0420 00:15:08.270692   94171 main.go:141] libmachine: (ha-371738-m02)   </devices>
	I0420 00:15:08.270702   94171 main.go:141] libmachine: (ha-371738-m02) </domain>
	I0420 00:15:08.270712   94171 main.go:141] libmachine: (ha-371738-m02) 
	I0420 00:15:08.277278   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:5e:5f:4d in network default
	I0420 00:15:08.277899   94171 main.go:141] libmachine: (ha-371738-m02) Ensuring networks are active...
	I0420 00:15:08.277922   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
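The two different MAC addresses logged around here (52:54:00:5e:5f:4d on libvirt's default network, 52:54:00:3b:ab:c8 on the private mk-ha-371738 network) correspond to the two virtio interface elements in the domain XML above. A quick way to confirm that mapping, assuming you are on the same libvirt host the test ran on:

	# List the new domain's interfaces with their MAC addresses and source networks.
	virsh domiflist ha-371738-m02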
	I0420 00:15:08.278576   94171 main.go:141] libmachine: (ha-371738-m02) Ensuring network default is active
	I0420 00:15:08.278949   94171 main.go:141] libmachine: (ha-371738-m02) Ensuring network mk-ha-371738 is active
	I0420 00:15:08.279295   94171 main.go:141] libmachine: (ha-371738-m02) Getting domain xml...
	I0420 00:15:08.280066   94171 main.go:141] libmachine: (ha-371738-m02) Creating domain...
	I0420 00:15:09.479657   94171 main.go:141] libmachine: (ha-371738-m02) Waiting to get IP...
	I0420 00:15:09.480611   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:09.480992   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:09.481045   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:09.480994   94574 retry.go:31] will retry after 304.170036ms: waiting for machine to come up
	I0420 00:15:09.786359   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:09.786916   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:09.786948   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:09.786882   94574 retry.go:31] will retry after 243.704709ms: waiting for machine to come up
	I0420 00:15:10.032349   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:10.032828   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:10.032863   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:10.032767   94574 retry.go:31] will retry after 376.540423ms: waiting for machine to come up
	I0420 00:15:10.411306   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:10.411841   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:10.411865   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:10.411775   94574 retry.go:31] will retry after 487.578156ms: waiting for machine to come up
	I0420 00:15:10.901455   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:10.901951   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:10.901986   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:10.901924   94574 retry.go:31] will retry after 589.95165ms: waiting for machine to come up
	I0420 00:15:11.493802   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:11.494275   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:11.494343   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:11.494230   94574 retry.go:31] will retry after 645.321602ms: waiting for machine to come up
	I0420 00:15:12.140990   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:12.141406   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:12.141434   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:12.141358   94574 retry.go:31] will retry after 757.810418ms: waiting for machine to come up
	I0420 00:15:12.901051   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:12.901506   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:12.901534   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:12.901460   94574 retry.go:31] will retry after 1.170896015s: waiting for machine to come up
	I0420 00:15:14.073666   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:14.074068   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:14.074097   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:14.074007   94574 retry.go:31] will retry after 1.501764207s: waiting for machine to come up
	I0420 00:15:15.577571   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:15.577934   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:15.577990   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:15.577901   94574 retry.go:31] will retry after 2.27309831s: waiting for machine to come up
	I0420 00:15:17.852548   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:17.853040   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:17.853068   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:17.852996   94574 retry.go:31] will retry after 2.900030711s: waiting for machine to come up
	I0420 00:15:20.754252   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:20.754731   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:20.754766   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:20.754651   94574 retry.go:31] will retry after 2.698308641s: waiting for machine to come up
	I0420 00:15:23.454454   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:23.454855   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:23.454884   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:23.454804   94574 retry.go:31] will retry after 4.201613554s: waiting for machine to come up
	I0420 00:15:27.658762   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:27.659200   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:27.659228   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:27.659137   94574 retry.go:31] will retry after 4.466090921s: waiting for machine to come up
	I0420 00:15:32.127839   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.128261   94171 main.go:141] libmachine: (ha-371738-m02) Found IP for machine: 192.168.39.48
	I0420 00:15:32.128288   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has current primary IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
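The retry lines above are minikube polling the libvirt network for a DHCP lease that matches the new domain's MAC address, backing off between attempts until a lease appears. A rough stand-alone equivalent using virsh directly (network name and MAC taken from the log; the fixed backoff schedule is a simplification of minikube's randomized retry):

	# Poll the private network's DHCP leases until the new node's MAC shows up.
	MAC=52:54:00:3b:ab:c8
	NET=mk-ha-371738
	for delay in 1 2 4 8 16; do
	    IP=$(virsh net-dhcp-leases "$NET" | awk -v mac="$MAC" '$3 == mac {split($5, a, "/"); print a[1]}')
	    [ -n "$IP" ] && break
	    sleep "$delay"
	done
	echo "${IP:-no lease yet}"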
	I0420 00:15:32.128298   94171 main.go:141] libmachine: (ha-371738-m02) Reserving static IP address...
	I0420 00:15:32.128650   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find host DHCP lease matching {name: "ha-371738-m02", mac: "52:54:00:3b:ab:c8", ip: "192.168.39.48"} in network mk-ha-371738
	I0420 00:15:32.199433   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Getting to WaitForSSH function...
	I0420 00:15:32.199464   94171 main.go:141] libmachine: (ha-371738-m02) Reserved static IP address: 192.168.39.48
	I0420 00:15:32.199477   94171 main.go:141] libmachine: (ha-371738-m02) Waiting for SSH to be available...
	I0420 00:15:32.202838   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.203265   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.203302   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.203426   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Using SSH client type: external
	I0420 00:15:32.203459   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa (-rw-------)
	I0420 00:15:32.203496   94171 main.go:141] libmachine: (ha-371738-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 00:15:32.203510   94171 main.go:141] libmachine: (ha-371738-m02) DBG | About to run SSH command:
	I0420 00:15:32.203523   94171 main.go:141] libmachine: (ha-371738-m02) DBG | exit 0
	I0420 00:15:32.325485   94171 main.go:141] libmachine: (ha-371738-m02) DBG | SSH cmd err, output: <nil>: 
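The WaitForSSH step runs a throw-away exit 0 over an external ssh client until it succeeds; the argument list in the DBG lines above translates to roughly the following invocation (key path, user, and IP copied from the log):

	# Probe that SSH on the new node accepts the generated key; exit status 0 means it is up.
	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa \
	    -p 22 docker@192.168.39.48 'exit 0'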
	I0420 00:15:32.325786   94171 main.go:141] libmachine: (ha-371738-m02) KVM machine creation complete!
	I0420 00:15:32.326127   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetConfigRaw
	I0420 00:15:32.326719   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:32.326936   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:32.327114   94171 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 00:15:32.327130   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:15:32.328417   94171 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 00:15:32.328442   94171 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 00:15:32.328448   94171 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 00:15:32.328454   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.330848   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.331211   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.331252   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.331396   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:32.331597   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.331772   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.331912   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:32.332053   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:32.332323   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:32.332340   94171 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 00:15:32.432750   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:15:32.432772   94171 main.go:141] libmachine: Detecting the provisioner...
	I0420 00:15:32.432779   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.435496   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.435911   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.435946   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.436049   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:32.436262   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.436447   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.436646   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:32.436816   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:32.436988   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:32.437002   94171 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 00:15:32.542787   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 00:15:32.542882   94171 main.go:141] libmachine: found compatible host: buildroot
	I0420 00:15:32.542892   94171 main.go:141] libmachine: Provisioning with buildroot...
	I0420 00:15:32.542899   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetMachineName
	I0420 00:15:32.543184   94171 buildroot.go:166] provisioning hostname "ha-371738-m02"
	I0420 00:15:32.543208   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetMachineName
	I0420 00:15:32.543401   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.546001   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.546491   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.546521   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.546684   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:32.546908   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.547089   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.547265   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:32.547484   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:32.547679   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:32.547690   94171 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-371738-m02 && echo "ha-371738-m02" | sudo tee /etc/hostname
	I0420 00:15:32.665216   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738-m02
	
	I0420 00:15:32.665240   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.668086   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.668484   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.668515   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.668702   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:32.668898   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.669060   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.669195   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:32.669390   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:32.669633   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:32.669658   94171 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-371738-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-371738-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-371738-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:15:32.791413   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
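Taken together, the two provisioning commands above set the node's hostname and pin it to 127.0.1.1 in /etc/hosts. As a stand-alone script the same steps look like this (hostname taken from the log):

	# Set the hostname and make 127.0.1.1 resolve to it, mirroring the provisioning commands above.
	NODE=ha-371738-m02
	sudo hostname "$NODE" && echo "$NODE" | sudo tee /etc/hostname
	if ! grep -xq ".*\s$NODE" /etc/hosts; then
	    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NODE/g" /etc/hosts
	    else
	        echo "127.0.1.1 $NODE" | sudo tee -a /etc/hosts
	    fi
	fi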
	I0420 00:15:32.791448   94171 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:15:32.791466   94171 buildroot.go:174] setting up certificates
	I0420 00:15:32.791477   94171 provision.go:84] configureAuth start
	I0420 00:15:32.791485   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetMachineName
	I0420 00:15:32.791823   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:15:32.794666   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.795068   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.795097   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.795249   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.797673   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.798026   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.798051   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.798196   94171 provision.go:143] copyHostCerts
	I0420 00:15:32.798220   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:15:32.798247   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:15:32.798256   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:15:32.798315   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:15:32.798420   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:15:32.798440   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:15:32.798447   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:15:32.798476   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:15:32.798524   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:15:32.798546   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:15:32.798552   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:15:32.798572   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:15:32.798613   94171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.ha-371738-m02 san=[127.0.0.1 192.168.39.48 ha-371738-m02 localhost minikube]
	I0420 00:15:33.245269   94171 provision.go:177] copyRemoteCerts
	I0420 00:15:33.245363   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:15:33.245388   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.248117   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.248513   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.248538   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.248681   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.248922   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.249107   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.249263   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:15:33.334547   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:15:33.334619   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:15:33.361714   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:15:33.361762   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0420 00:15:33.387454   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:15:33.387511   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 00:15:33.412605   94171 provision.go:87] duration metric: took 621.113895ms to configureAuth
	I0420 00:15:33.412636   94171 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:15:33.412855   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:15:33.412944   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.415597   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.415879   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.415906   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.415998   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.416216   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.416384   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.416484   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.416670   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:33.416848   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:33.416869   94171 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:15:33.688228   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
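The %!s(MISSING) in the logged command is an artifact of how this log line is formatted, not part of what was executed; judging from the output echoed back above, the step writes a one-line sysconfig drop-in for CRI-O and restarts the service, roughly:

	# Persist the minikube-specific CRI-O options and restart the runtime.
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	    | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio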
	
	I0420 00:15:33.688256   94171 main.go:141] libmachine: Checking connection to Docker...
	I0420 00:15:33.688266   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetURL
	I0420 00:15:33.689677   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Using libvirt version 6000000
	I0420 00:15:33.691545   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.691849   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.691877   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.692069   94171 main.go:141] libmachine: Docker is up and running!
	I0420 00:15:33.692087   94171 main.go:141] libmachine: Reticulating splines...
	I0420 00:15:33.692094   94171 client.go:171] duration metric: took 25.849305358s to LocalClient.Create
	I0420 00:15:33.692118   94171 start.go:167] duration metric: took 25.849361585s to libmachine.API.Create "ha-371738"
	I0420 00:15:33.692131   94171 start.go:293] postStartSetup for "ha-371738-m02" (driver="kvm2")
	I0420 00:15:33.692145   94171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:15:33.692176   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.692399   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:15:33.692425   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.694378   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.694680   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.694710   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.694845   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.695030   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.695195   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.695311   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:15:33.777206   94171 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:15:33.781523   94171 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:15:33.781547   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:15:33.781619   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:15:33.781717   94171 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:15:33.781730   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:15:33.781828   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:15:33.792378   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:15:33.818287   94171 start.go:296] duration metric: took 126.141852ms for postStartSetup
	I0420 00:15:33.818346   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetConfigRaw
	I0420 00:15:33.819026   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:15:33.821828   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.822285   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.822314   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.822576   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:15:33.822762   94171 start.go:128] duration metric: took 25.998188301s to createHost
	I0420 00:15:33.822789   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.824995   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.825366   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.825394   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.825548   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.825726   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.825856   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.825990   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.826193   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:33.826344   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:33.826355   94171 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:15:33.926129   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713572133.910912741
	
	I0420 00:15:33.926157   94171 fix.go:216] guest clock: 1713572133.910912741
	I0420 00:15:33.926168   94171 fix.go:229] Guest: 2024-04-20 00:15:33.910912741 +0000 UTC Remote: 2024-04-20 00:15:33.822774494 +0000 UTC m=+83.634486162 (delta=88.138247ms)
	I0420 00:15:33.926187   94171 fix.go:200] guest clock delta is within tolerance: 88.138247ms
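The date +%!s(MISSING).%!N(MISSING) above is again a log-formatting artifact; the probe run on the guest is date +%s.%N, and fix.go compares it with the host clock so the skew (about 88ms here) can be checked against a tolerance. A minimal sketch of the same comparison (the 1-second threshold is an assumption, not minikube's configured tolerance):

	# Compare guest and host clocks; a non-zero exit means the skew exceeds the assumed 1s threshold.
	KEY=/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa
	GUEST=$(ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.39.48 'date +%s.%N')
	HOST=$(date +%s.%N)
	awk -v g="$GUEST" -v h="$HOST" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "skew %.3fs\n", d; exit (d > 1) }'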
	I0420 00:15:33.926192   94171 start.go:83] releasing machines lock for "ha-371738-m02", held for 26.101808733s
	I0420 00:15:33.926213   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.926499   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:15:33.929071   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.929522   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.929543   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.932165   94171 out.go:177] * Found network options:
	I0420 00:15:33.933543   94171 out.go:177]   - NO_PROXY=192.168.39.217
	W0420 00:15:33.934995   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 00:15:33.935029   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.935543   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.935719   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.935808   94171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:15:33.935853   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	W0420 00:15:33.935917   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 00:15:33.936006   94171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:15:33.936020   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.938351   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.938565   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.938704   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.938720   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.938942   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.938952   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.938979   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.939143   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.939152   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.939355   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.939386   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.939495   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:15:33.939567   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.939687   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:15:34.183588   94171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 00:15:34.190450   94171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:15:34.190527   94171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:15:34.207725   94171 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
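The find command above, whose -printf argument shows up as %!p(MISSING) because of the log formatting, renames any bridge or podman CNI configs so CRI-O will not load them; spelled out it is roughly:

	# Move aside bridge/podman CNI configs so only minikube's chosen CNI stays active.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;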
	I0420 00:15:34.207746   94171 start.go:494] detecting cgroup driver to use...
	I0420 00:15:34.207795   94171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:15:34.225416   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:15:34.239344   94171 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:15:34.239396   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:15:34.255623   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:15:34.272311   94171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:15:34.388599   94171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:15:34.555101   94171 docker.go:233] disabling docker service ...
	I0420 00:15:34.555185   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:15:34.571384   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:15:34.584862   94171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:15:34.713895   94171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:15:34.843002   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:15:34.858864   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:15:34.879516   94171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:15:34.879587   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.890626   94171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:15:34.890692   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.902293   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.913463   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.924524   94171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:15:34.936112   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.947267   94171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.966535   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
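The sed commands above edit CRI-O's drop-in config in place: they point pause_image at registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, run conmon in the pod cgroup, and add an unprivileged-port sysctl. A quick way to check the result on the node (the expected lines are reconstructed from the sed expressions; the surrounding sections of the file are not shown in the log):

	# Show the keys the sed edits above are expected to have set.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",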
	I0420 00:15:34.977617   94171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:15:34.987289   94171 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 00:15:34.987336   94171 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 00:15:35.001770   94171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:15:35.013380   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:15:35.132279   94171 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 00:15:35.279321   94171 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:15:35.279416   94171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:15:35.284623   94171 start.go:562] Will wait 60s for crictl version
	I0420 00:15:35.284683   94171 ssh_runner.go:195] Run: which crictl
	I0420 00:15:35.288850   94171 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:15:35.325397   94171 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 00:15:35.325472   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:15:35.356336   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:15:35.388553   94171 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:15:35.390133   94171 out.go:177]   - env NO_PROXY=192.168.39.217
	I0420 00:15:35.391263   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:15:35.394122   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:35.394503   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:35.394539   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:35.394745   94171 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:15:35.399139   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:15:35.412865   94171 mustload.go:65] Loading cluster: ha-371738
	I0420 00:15:35.413099   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:15:35.413403   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:35.413428   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:35.428152   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36737
	I0420 00:15:35.428568   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:35.429009   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:35.429029   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:35.429377   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:35.429556   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:15:35.431068   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:15:35.431368   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:35.431398   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:35.445434   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
	I0420 00:15:35.445841   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:35.446325   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:35.446356   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:35.446634   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:35.446824   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:15:35.447005   94171 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738 for IP: 192.168.39.48
	I0420 00:15:35.447016   94171 certs.go:194] generating shared ca certs ...
	I0420 00:15:35.447031   94171 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:35.447167   94171 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:15:35.447208   94171 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:15:35.447219   94171 certs.go:256] generating profile certs ...
	I0420 00:15:35.447291   94171 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key
	I0420 00:15:35.447316   94171 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.67002bff
	I0420 00:15:35.447336   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.67002bff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.48 192.168.39.254]
	I0420 00:15:35.526118   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.67002bff ...
	I0420 00:15:35.526149   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.67002bff: {Name:mk5a6afacdffd81cc24458df0cd2fcf66072f99f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:35.526333   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.67002bff ...
	I0420 00:15:35.526350   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.67002bff: {Name:mkdbf1224bcff9fd3a1190522604ec463ca02a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:35.526451   94171 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.67002bff -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt
	I0420 00:15:35.526589   94171 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.67002bff -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key
	I0420 00:15:35.526717   94171 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key
	I0420 00:15:35.526735   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:15:35.526748   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:15:35.526761   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:15:35.526771   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:15:35.526782   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:15:35.526792   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:15:35.526801   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:15:35.526813   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:15:35.526865   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:15:35.526892   94171 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:15:35.526901   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:15:35.526920   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:15:35.526951   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:15:35.526971   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:15:35.527008   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:15:35.527032   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:15:35.527046   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:15:35.527058   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:15:35.527090   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:15:35.529870   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:35.530323   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:15:35.530350   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:35.530658   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:15:35.530849   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:15:35.531021   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:15:35.531173   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:15:35.609690   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0420 00:15:35.615153   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0420 00:15:35.629469   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0420 00:15:35.634561   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0420 00:15:35.649926   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0420 00:15:35.655253   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0420 00:15:35.670201   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0420 00:15:35.674959   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0420 00:15:35.687122   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0420 00:15:35.692016   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0420 00:15:35.707599   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0420 00:15:35.712686   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0420 00:15:35.726635   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:15:35.754434   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:15:35.780962   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:15:35.806894   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:15:35.832911   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0420 00:15:35.860118   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:15:35.891694   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:15:35.917813   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 00:15:35.943651   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:15:35.969659   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:15:35.995643   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:15:36.022031   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0420 00:15:36.039822   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0420 00:15:36.058775   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0420 00:15:36.077102   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0420 00:15:36.095498   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0420 00:15:36.114478   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0420 00:15:36.135508   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0420 00:15:36.154815   94171 ssh_runner.go:195] Run: openssl version
	I0420 00:15:36.161381   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:15:36.173665   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:15:36.178745   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:15:36.178823   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:15:36.184809   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 00:15:36.196249   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:15:36.209703   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:15:36.215107   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:15:36.215149   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:15:36.221286   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 00:15:36.232842   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:15:36.244268   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:15:36.249153   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:15:36.249197   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:15:36.255323   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
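Each CA file is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 above) so software that trusts the system store can find it. A small sketch of deriving that link name with the same openssl invocation the log shows; the certificate path is just an example:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

	// "openssl x509 -hash -noout -in <cert>" prints the subject hash that
	// names the /etc/ssl/certs/<hash>.0 symlink (b5213941.0 above).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")

	// Recreate the symlink if it is missing or stale (needs root).
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
}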
	I0420 00:15:36.267770   94171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:15:36.273280   94171 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 00:15:36.273357   94171 kubeadm.go:928] updating node {m02 192.168.39.48 8443 v1.30.0 crio true true} ...
	I0420 00:15:36.273451   94171 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-371738-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 00:15:36.273480   94171 kube-vip.go:111] generating kube-vip config ...
	I0420 00:15:36.273517   94171 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0420 00:15:36.290259   94171 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 00:15:36.290328   94171 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
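The kube-vip manifest above is generated rather than hand-written: the VIP (192.168.39.254), interface and load-balancer settings are substituted into a static-pod template before it is written to /etc/kubernetes/manifests/kube-vip.yaml. A minimal text/template sketch of that substitution; the template is abbreviated and illustrative, not minikube's actual one:

package main

import (
	"log"
	"os"
	"text/template"
)

// Per-cluster values; everything else in the kube-vip manifest is static.
type kubeVipParams struct {
	VIP       string
	Interface string
	Port      string
}

// Abbreviated template covering only the env entries that vary; the field
// names here are illustrative.
const kubeVipEnv = `    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipEnv))
	p := kubeVipParams{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"}
	// A real flow would render the full manifest and write it to
	// /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes in the log).
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}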
	I0420 00:15:36.290379   94171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:15:36.301623   94171 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0420 00:15:36.301675   94171 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0420 00:15:36.312406   94171 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0420 00:15:36.312430   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0420 00:15:36.312508   94171 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0420 00:15:36.312532   94171 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0420 00:15:36.312558   94171 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0420 00:15:36.318740   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0420 00:15:36.318768   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0420 00:15:36.983078   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0420 00:15:36.983161   94171 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0420 00:15:36.989155   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0420 00:15:36.989189   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0420 00:15:37.363032   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:15:37.379663   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0420 00:15:37.379738   94171 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0420 00:15:37.384472   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0420 00:15:37.384504   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
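kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a checksum URL, cached under .minikube/cache, and only then copied into /var/lib/minikube/binaries on the node. A stripped-down sketch of downloading one binary and verifying it against the published SHA-256 file; the URLs match the log, the local output path is a placeholder:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory (fine for a sketch).
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet"

	bin, err := fetch(url)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(url + ".sha256")
	if err != nil {
		log.Fatal(err)
	}

	// The .sha256 file carries the hex digest (possibly followed by a name).
	want := strings.Fields(string(sum))[0]
	digest := sha256.Sum256(bin)
	if got := hex.EncodeToString(digest[:]); got != want {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}

	// Placeholder path; the log caches under .minikube/cache/linux/amd64/v1.30.0/.
	if err := os.WriteFile("/tmp/kubelet", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet downloaded and verified")
}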
	I0420 00:15:37.851826   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0420 00:15:37.864931   94171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0420 00:15:37.884017   94171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:15:37.904291   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0420 00:15:37.922824   94171 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0420 00:15:37.927295   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:15:37.941424   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:15:38.082599   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:15:38.103136   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:15:38.103655   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:38.103699   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:38.119038   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0420 00:15:38.119467   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:38.119966   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:38.119990   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:38.120416   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:38.120670   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:15:38.120871   94171 start.go:316] joinCluster: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.16
8.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:15:38.120997   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0420 00:15:38.121023   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:15:38.124250   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:38.124735   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:15:38.124766   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:38.124882   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:15:38.125227   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:15:38.125400   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:15:38.125544   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:15:38.302630   94171 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:15:38.302694   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9o24la.0magggnm0kh9r5cv --discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-371738-m02 --control-plane --apiserver-advertise-address=192.168.39.48 --apiserver-bind-port=8443"
	I0420 00:16:01.876572   94171 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9o24la.0magggnm0kh9r5cv --discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-371738-m02 --control-plane --apiserver-advertise-address=192.168.39.48 --apiserver-bind-port=8443": (23.573845696s)
	I0420 00:16:01.876628   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0420 00:16:02.453472   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-371738-m02 minikube.k8s.io/updated_at=2024_04_20T00_16_02_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-371738 minikube.k8s.io/primary=false
	I0420 00:16:02.600156   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-371738-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0420 00:16:02.726375   94171 start.go:318] duration metric: took 24.605498766s to joinCluster
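After the join command returns, the new member is labeled as a non-primary minikube node and its control-plane NoSchedule taint is removed so it can also schedule workloads. A compact sketch of that post-join step as shell-outs to the bundled kubectl; only the kubeconfig path and the two commands are taken from the log, the rest is illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// kubectlRun invokes the bundled kubectl against the in-VM kubeconfig,
// as the label and taint commands in the log do.
func kubectlRun(args ...string) error {
	all := append([]string{"--kubeconfig=/var/lib/minikube/kubeconfig"}, args...)
	out, err := exec.Command("/var/lib/minikube/binaries/v1.30.0/kubectl", all...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	node := "ha-371738-m02"
	// Tag the joined member as a non-primary minikube node (labels abbreviated).
	if err := kubectlRun("label", "--overwrite", "nodes", node, "minikube.k8s.io/primary=false"); err != nil {
		log.Fatal(err)
	}
	// Remove the control-plane NoSchedule taint so workloads can land here too.
	if err := kubectlRun("taint", "nodes", node, "node-role.kubernetes.io/control-plane:NoSchedule-"); err != nil {
		log.Fatal(err)
	}
}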
	I0420 00:16:02.726459   94171 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:16:02.728078   94171 out.go:177] * Verifying Kubernetes components...
	I0420 00:16:02.726786   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:16:02.729388   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:16:02.984478   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:16:03.027056   94171 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:16:03.027286   94171 kapi.go:59] client config for ha-371738: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt", KeyFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key", CAFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0420 00:16:03.027347   94171 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0420 00:16:03.027701   94171 node_ready.go:35] waiting up to 6m0s for node "ha-371738-m02" to be "Ready" ...
	I0420 00:16:03.027819   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:03.027828   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:03.027835   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:03.027840   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:03.040597   94171 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0420 00:16:03.528251   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:03.528280   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:03.528292   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:03.528298   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:03.538500   94171 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0420 00:16:04.028366   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:04.028392   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:04.028402   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:04.028407   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:04.032845   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:04.527944   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:04.527973   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:04.527985   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:04.527990   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:04.534032   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:16:05.028034   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:05.028059   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:05.028070   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:05.028075   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:05.037367   94171 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 00:16:05.038394   94171 node_ready.go:53] node "ha-371738-m02" has status "Ready":"False"
	I0420 00:16:05.528593   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:05.528619   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:05.528628   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:05.528635   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:05.533594   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:06.028799   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:06.028825   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:06.028834   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:06.028838   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:06.032412   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:06.528130   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:06.528160   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:06.528175   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:06.528182   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:06.531637   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.027993   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:07.028021   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.028030   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.028036   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.031484   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.528642   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:07.528671   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.528681   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.528688   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.532410   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.533072   94171 node_ready.go:49] node "ha-371738-m02" has status "Ready":"True"
	I0420 00:16:07.533096   94171 node_ready.go:38] duration metric: took 4.505362063s for node "ha-371738-m02" to be "Ready" ...
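The wait above is a plain poll: GET the node object every ~500 ms until its Ready condition reports True. A minimal client-go sketch of the same loop, assuming client-go is available and using the kubeconfig path from this run:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18703-76456/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Poll the node object until its Ready condition is True, as the log does.
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-371738-m02", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}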
	I0420 00:16:07.533109   94171 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:16:07.533224   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:07.533238   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.533249   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.533257   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.542380   94171 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 00:16:07.550113   94171 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.550212   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9hc82
	I0420 00:16:07.550224   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.550233   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.550240   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.553841   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.554434   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:07.554450   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.554457   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.554462   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.558093   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.558719   94171 pod_ready.go:92] pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:07.558738   94171 pod_ready.go:81] duration metric: took 8.5945ms for pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.558747   94171 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.558798   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jvvpr
	I0420 00:16:07.558807   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.558813   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.558817   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.566943   94171 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0420 00:16:07.568014   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:07.568030   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.568038   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.568043   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.571116   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.571834   94171 pod_ready.go:92] pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:07.571851   94171 pod_ready.go:81] duration metric: took 13.098517ms for pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.571860   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.571921   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738
	I0420 00:16:07.571930   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.571937   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.571942   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.575504   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.576045   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:07.576058   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.576066   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.576069   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.578942   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:07.579568   94171 pod_ready.go:92] pod "etcd-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:07.579585   94171 pod_ready.go:81] duration metric: took 7.719539ms for pod "etcd-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.579593   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.579649   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:07.579657   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.579663   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.579667   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.583027   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.583726   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:07.583740   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.583747   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.583751   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.586376   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:08.080395   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:08.080420   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:08.080428   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:08.080432   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:08.083902   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:08.084567   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:08.084585   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:08.084591   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:08.084594   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:08.087411   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:08.579816   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:08.579842   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:08.579849   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:08.579853   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:08.583538   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:08.584123   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:08.584139   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:08.584150   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:08.584155   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:08.587089   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:09.079861   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:09.079888   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:09.079895   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:09.079898   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:09.083662   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:09.084688   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:09.084704   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:09.084711   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:09.084716   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:09.087530   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:09.580509   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:09.580533   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:09.580541   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:09.580544   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:09.584943   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:09.585594   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:09.585610   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:09.585617   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:09.585621   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:09.588475   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:09.589151   94171 pod_ready.go:102] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"False"
	I0420 00:16:10.080454   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:10.080477   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:10.080485   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:10.080489   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:10.084366   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:10.085425   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:10.085439   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:10.085447   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:10.085456   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:10.088025   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:10.580769   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:10.580792   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:10.580801   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:10.580804   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:10.584972   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:10.586281   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:10.586294   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:10.586301   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:10.586305   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:10.589252   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:11.080491   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:11.080520   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:11.080532   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:11.080540   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:11.083840   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:11.084560   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:11.084576   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:11.084584   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:11.084588   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:11.087276   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:11.580006   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:11.580033   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:11.580046   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:11.580054   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:11.588962   94171 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0420 00:16:11.589636   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:11.589653   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:11.589661   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:11.589666   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:11.594181   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:11.594917   94171 pod_ready.go:102] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"False"
	I0420 00:16:12.080399   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:12.080426   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:12.080447   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:12.080452   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:12.082979   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:12.083841   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:12.083857   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:12.083863   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:12.083865   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:12.086264   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:12.580286   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:12.580309   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:12.580320   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:12.580325   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:12.584375   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:12.585372   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:12.585392   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:12.585401   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:12.585409   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:12.588051   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:13.080657   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:13.080682   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:13.080690   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:13.080694   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:13.084699   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:13.085544   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:13.085562   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:13.085569   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:13.085573   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:13.088305   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:13.579981   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:13.580006   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:13.580013   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:13.580017   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:13.583406   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:13.584270   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:13.584286   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:13.584296   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:13.584301   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:13.587678   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:14.080722   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:14.080745   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.080754   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.080757   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.084893   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:14.085577   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:14.085593   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.085603   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.085608   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.088897   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:14.089424   94171 pod_ready.go:102] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"False"
	I0420 00:16:14.579821   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:14.579850   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.579860   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.579864   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.585596   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:16:14.586409   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:14.586427   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.586435   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.586439   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.589410   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:14.590076   94171 pod_ready.go:92] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:14.590098   94171 pod_ready.go:81] duration metric: took 7.010496982s for pod "etcd-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:14.590130   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:14.590197   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738
	I0420 00:16:14.590208   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.590218   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.590225   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.592900   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:14.593681   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:14.593698   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.593708   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.593713   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.596210   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:14.597047   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:14.597069   94171 pod_ready.go:81] duration metric: took 6.926378ms for pod "kube-apiserver-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:14.597082   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:14.597143   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:14.597154   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.597164   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.597173   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.599734   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:14.600476   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:14.600489   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.600495   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.600498   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.603031   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:15.098029   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:15.098054   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:15.098061   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:15.098067   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:15.102974   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:15.104078   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:15.104095   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:15.104103   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:15.104106   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:15.107208   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:15.597830   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:15.597855   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:15.597867   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:15.597872   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:15.601999   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:15.603984   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:15.603999   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:15.604007   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:15.604013   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:15.606658   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:16.097502   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:16.097527   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.097535   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.097539   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.102180   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:16.103130   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:16.103153   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.103161   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.103165   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.106996   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:16.597927   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:16.597953   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.597961   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.597965   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.602269   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:16.603505   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:16.603534   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.603545   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.603551   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.610277   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:16:16.612863   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:16.612881   94171 pod_ready.go:81] duration metric: took 2.015792383s for pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.612892   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.612947   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738
	I0420 00:16:16.612954   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.612961   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.612964   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.620342   94171 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 00:16:16.620960   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:16.620975   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.620982   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.620985   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.623607   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:16.624210   94171 pod_ready.go:92] pod "kube-controller-manager-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:16.624225   94171 pod_ready.go:81] duration metric: took 11.32732ms for pod "kube-controller-manager-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.624234   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59wls" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.624285   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59wls
	I0420 00:16:16.624292   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.624299   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.624305   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.627444   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:16.628184   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:16.628195   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.628203   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.628208   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.630782   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:16.631249   94171 pod_ready.go:92] pod "kube-proxy-59wls" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:16.631264   94171 pod_ready.go:81] duration metric: took 7.02177ms for pod "kube-proxy-59wls" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.631271   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zw62l" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.631312   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw62l
	I0420 00:16:16.631317   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.631324   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.631327   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.640083   94171 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0420 00:16:16.728866   94171 request.go:629] Waited for 88.26916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:16.728945   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:16.728953   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.728964   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.728973   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.735095   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:16:16.735892   94171 pod_ready.go:92] pod "kube-proxy-zw62l" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:16.735918   94171 pod_ready.go:81] duration metric: took 104.638962ms for pod "kube-proxy-zw62l" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.735932   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.928995   94171 request.go:629] Waited for 192.975571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738
	I0420 00:16:16.929089   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738
	I0420 00:16:16.929096   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.929112   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.929122   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.932863   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:17.129543   94171 request.go:629] Waited for 196.06387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:17.129615   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:17.129624   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.129643   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.129653   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.134397   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:17.135431   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:17.135449   94171 pod_ready.go:81] duration metric: took 399.509647ms for pod "kube-scheduler-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:17.135460   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:17.329650   94171 request.go:629] Waited for 194.095177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m02
	I0420 00:16:17.329720   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m02
	I0420 00:16:17.329727   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.329738   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.329746   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.333146   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:17.529289   94171 request.go:629] Waited for 195.373547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:17.529369   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:17.529380   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.529391   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.529409   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.537480   94171 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0420 00:16:17.538632   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:17.538656   94171 pod_ready.go:81] duration metric: took 403.188721ms for pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:17.538671   94171 pod_ready.go:38] duration metric: took 10.005521739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:16:17.538693   94171 api_server.go:52] waiting for apiserver process to appear ...
	I0420 00:16:17.538762   94171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:16:17.558026   94171 api_server.go:72] duration metric: took 14.83152097s to wait for apiserver process to appear ...
	I0420 00:16:17.558054   94171 api_server.go:88] waiting for apiserver healthz status ...
	I0420 00:16:17.558078   94171 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0420 00:16:17.562889   94171 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0420 00:16:17.562975   94171 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0420 00:16:17.562988   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.563000   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.563011   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.564643   94171 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0420 00:16:17.564962   94171 api_server.go:141] control plane version: v1.30.0
	I0420 00:16:17.565007   94171 api_server.go:131] duration metric: took 6.943763ms to wait for apiserver health ...
	I0420 00:16:17.565018   94171 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 00:16:17.729433   94171 request.go:629] Waited for 164.318649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:17.729514   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:17.729521   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.729531   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.729539   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.736762   94171 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 00:16:17.742335   94171 system_pods.go:59] 17 kube-system pods found
	I0420 00:16:17.742367   94171 system_pods.go:61] "coredns-7db6d8ff4d-9hc82" [279d40d8-eb21-476c-ba36-bc7592777126] Running
	I0420 00:16:17.742372   94171 system_pods.go:61] "coredns-7db6d8ff4d-jvvpr" [104d5328-1f6a-4747-8e26-9a98e38dc1cc] Running
	I0420 00:16:17.742376   94171 system_pods.go:61] "etcd-ha-371738" [5e23c4a0-7c15-47b9-b722-82e61a10f286] Running
	I0420 00:16:17.742379   94171 system_pods.go:61] "etcd-ha-371738-m02" [712e8a6e-7007-4cf1-8a0c-4e33eeccebcd] Running
	I0420 00:16:17.742382   94171 system_pods.go:61] "kindnet-ggw7f" [2e0d1c1a-6fb4-4c3e-ae2b-41cfccaba2dd] Running
	I0420 00:16:17.742386   94171 system_pods.go:61] "kindnet-s87k2" [0820561f-f794-4ac5-8ce2-ae0cb4310c3e] Running
	I0420 00:16:17.742389   94171 system_pods.go:61] "kube-apiserver-ha-371738" [301ce02b-37b1-42ba-8a45-fbde327e2a02] Running
	I0420 00:16:17.742395   94171 system_pods.go:61] "kube-apiserver-ha-371738-m02" [a22f017a-e7b0-4748-9486-b52d35284584] Running
	I0420 00:16:17.742398   94171 system_pods.go:61] "kube-controller-manager-ha-371738" [bc03ed79-b024-46b1-af13-45a3def8bcae] Running
	I0420 00:16:17.742406   94171 system_pods.go:61] "kube-controller-manager-ha-371738-m02" [7b460bfb-bddf-46c0-a30c-f5e9757a32ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 00:16:17.742411   94171 system_pods.go:61] "kube-proxy-59wls" [722c6b7d-109b-4201-a5f1-c02a65befcf2] Running
	I0420 00:16:17.742415   94171 system_pods.go:61] "kube-proxy-zw62l" [dad72bfc-65c2-4007-9d5c-682ddf48c44d] Running
	I0420 00:16:17.742418   94171 system_pods.go:61] "kube-scheduler-ha-371738" [a3df56d3-c437-4ea9-b73d-2b22e93334b3] Running
	I0420 00:16:17.742422   94171 system_pods.go:61] "kube-scheduler-ha-371738-m02" [47dba6e4-cb4d-43e8-a173-06d13b08fd55] Running
	I0420 00:16:17.742425   94171 system_pods.go:61] "kube-vip-ha-371738" [8d162382-25bb-4393-8c45-a8487b571605] Running
	I0420 00:16:17.742428   94171 system_pods.go:61] "kube-vip-ha-371738-m02" [76331738-5bca-4724-939e-4c16a906e65b] Running
	I0420 00:16:17.742431   94171 system_pods.go:61] "storage-provisioner" [1d7b89d3-7cff-4258-8215-819971fa1b81] Running
	I0420 00:16:17.742440   94171 system_pods.go:74] duration metric: took 177.416016ms to wait for pod list to return data ...
	I0420 00:16:17.742448   94171 default_sa.go:34] waiting for default service account to be created ...
	I0420 00:16:17.929569   94171 request.go:629] Waited for 187.046792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0420 00:16:17.929628   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0420 00:16:17.929633   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.929640   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.929644   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.933818   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:17.934059   94171 default_sa.go:45] found service account: "default"
	I0420 00:16:17.934074   94171 default_sa.go:55] duration metric: took 191.619416ms for default service account to be created ...
	I0420 00:16:17.934083   94171 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 00:16:18.129599   94171 request.go:629] Waited for 195.448432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:18.129681   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:18.129687   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:18.129694   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:18.129698   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:18.139371   94171 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 00:16:18.145416   94171 system_pods.go:86] 17 kube-system pods found
	I0420 00:16:18.145442   94171 system_pods.go:89] "coredns-7db6d8ff4d-9hc82" [279d40d8-eb21-476c-ba36-bc7592777126] Running
	I0420 00:16:18.145447   94171 system_pods.go:89] "coredns-7db6d8ff4d-jvvpr" [104d5328-1f6a-4747-8e26-9a98e38dc1cc] Running
	I0420 00:16:18.145452   94171 system_pods.go:89] "etcd-ha-371738" [5e23c4a0-7c15-47b9-b722-82e61a10f286] Running
	I0420 00:16:18.145456   94171 system_pods.go:89] "etcd-ha-371738-m02" [712e8a6e-7007-4cf1-8a0c-4e33eeccebcd] Running
	I0420 00:16:18.145460   94171 system_pods.go:89] "kindnet-ggw7f" [2e0d1c1a-6fb4-4c3e-ae2b-41cfccaba2dd] Running
	I0420 00:16:18.145464   94171 system_pods.go:89] "kindnet-s87k2" [0820561f-f794-4ac5-8ce2-ae0cb4310c3e] Running
	I0420 00:16:18.145468   94171 system_pods.go:89] "kube-apiserver-ha-371738" [301ce02b-37b1-42ba-8a45-fbde327e2a02] Running
	I0420 00:16:18.145472   94171 system_pods.go:89] "kube-apiserver-ha-371738-m02" [a22f017a-e7b0-4748-9486-b52d35284584] Running
	I0420 00:16:18.145476   94171 system_pods.go:89] "kube-controller-manager-ha-371738" [bc03ed79-b024-46b1-af13-45a3def8bcae] Running
	I0420 00:16:18.145483   94171 system_pods.go:89] "kube-controller-manager-ha-371738-m02" [7b460bfb-bddf-46c0-a30c-f5e9757a32ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 00:16:18.145493   94171 system_pods.go:89] "kube-proxy-59wls" [722c6b7d-109b-4201-a5f1-c02a65befcf2] Running
	I0420 00:16:18.145498   94171 system_pods.go:89] "kube-proxy-zw62l" [dad72bfc-65c2-4007-9d5c-682ddf48c44d] Running
	I0420 00:16:18.145502   94171 system_pods.go:89] "kube-scheduler-ha-371738" [a3df56d3-c437-4ea9-b73d-2b22e93334b3] Running
	I0420 00:16:18.145506   94171 system_pods.go:89] "kube-scheduler-ha-371738-m02" [47dba6e4-cb4d-43e8-a173-06d13b08fd55] Running
	I0420 00:16:18.145513   94171 system_pods.go:89] "kube-vip-ha-371738" [8d162382-25bb-4393-8c45-a8487b571605] Running
	I0420 00:16:18.145516   94171 system_pods.go:89] "kube-vip-ha-371738-m02" [76331738-5bca-4724-939e-4c16a906e65b] Running
	I0420 00:16:18.145519   94171 system_pods.go:89] "storage-provisioner" [1d7b89d3-7cff-4258-8215-819971fa1b81] Running
	I0420 00:16:18.145527   94171 system_pods.go:126] duration metric: took 211.437795ms to wait for k8s-apps to be running ...
	I0420 00:16:18.145542   94171 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 00:16:18.145604   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:16:18.164996   94171 system_svc.go:56] duration metric: took 19.44571ms WaitForService to wait for kubelet
	I0420 00:16:18.165032   94171 kubeadm.go:576] duration metric: took 15.438532203s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:16:18.165056   94171 node_conditions.go:102] verifying NodePressure condition ...
	I0420 00:16:18.329498   94171 request.go:629] Waited for 164.361897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0420 00:16:18.329578   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0420 00:16:18.329583   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:18.329592   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:18.329596   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:18.333499   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:18.334982   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:16:18.335011   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:16:18.335026   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:16:18.335031   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:16:18.335036   94171 node_conditions.go:105] duration metric: took 169.973195ms to run NodePressure ...
	I0420 00:16:18.335051   94171 start.go:240] waiting for startup goroutines ...
	I0420 00:16:18.335087   94171 start.go:254] writing updated cluster config ...
	I0420 00:16:18.337370   94171 out.go:177] 
	I0420 00:16:18.338988   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:16:18.339079   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:16:18.340830   94171 out.go:177] * Starting "ha-371738-m03" control-plane node in "ha-371738" cluster
	I0420 00:16:18.342061   94171 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:16:18.342091   94171 cache.go:56] Caching tarball of preloaded images
	I0420 00:16:18.342186   94171 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:16:18.342197   94171 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:16:18.342283   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:16:18.342449   94171 start.go:360] acquireMachinesLock for ha-371738-m03: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:16:18.342494   94171 start.go:364] duration metric: took 25.993µs to acquireMachinesLock for "ha-371738-m03"
	I0420 00:16:18.342509   94171 start.go:93] Provisioning new machine with config: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:16:18.342597   94171 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0420 00:16:18.344180   94171 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0420 00:16:18.344259   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:16:18.344295   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:16:18.360912   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0420 00:16:18.361559   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:16:18.362122   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:16:18.362150   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:16:18.362512   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:16:18.362706   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetMachineName
	I0420 00:16:18.362883   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:18.363061   94171 start.go:159] libmachine.API.Create for "ha-371738" (driver="kvm2")
	I0420 00:16:18.363095   94171 client.go:168] LocalClient.Create starting
	I0420 00:16:18.363134   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 00:16:18.363174   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:16:18.363192   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:16:18.363260   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 00:16:18.363290   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:16:18.363307   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:16:18.363334   94171 main.go:141] libmachine: Running pre-create checks...
	I0420 00:16:18.363346   94171 main.go:141] libmachine: (ha-371738-m03) Calling .PreCreateCheck
	I0420 00:16:18.363530   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetConfigRaw
	I0420 00:16:18.363986   94171 main.go:141] libmachine: Creating machine...
	I0420 00:16:18.364003   94171 main.go:141] libmachine: (ha-371738-m03) Calling .Create
	I0420 00:16:18.364173   94171 main.go:141] libmachine: (ha-371738-m03) Creating KVM machine...
	I0420 00:16:18.365642   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found existing default KVM network
	I0420 00:16:18.365776   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found existing private KVM network mk-ha-371738
	I0420 00:16:18.365934   94171 main.go:141] libmachine: (ha-371738-m03) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03 ...
	I0420 00:16:18.365961   94171 main.go:141] libmachine: (ha-371738-m03) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 00:16:18.366012   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:18.365909   94971 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:16:18.366110   94171 main.go:141] libmachine: (ha-371738-m03) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 00:16:18.596347   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:18.596218   94971 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa...
	I0420 00:16:18.690070   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:18.689924   94971 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/ha-371738-m03.rawdisk...
	I0420 00:16:18.690110   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Writing magic tar header
	I0420 00:16:18.690125   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Writing SSH key tar header
	I0420 00:16:18.690138   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:18.690078   94971 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03 ...
	I0420 00:16:18.690246   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03
	I0420 00:16:18.690268   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03 (perms=drwx------)
	I0420 00:16:18.690276   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 00:16:18.690287   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:16:18.690296   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 00:16:18.690304   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 00:16:18.690312   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins
	I0420 00:16:18.690318   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home
	I0420 00:16:18.690326   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Skipping /home - not owner
	I0420 00:16:18.690336   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 00:16:18.690351   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 00:16:18.690369   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 00:16:18.690382   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 00:16:18.690396   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 00:16:18.690404   94171 main.go:141] libmachine: (ha-371738-m03) Creating domain...
	I0420 00:16:18.691283   94171 main.go:141] libmachine: (ha-371738-m03) define libvirt domain using xml: 
	I0420 00:16:18.691302   94171 main.go:141] libmachine: (ha-371738-m03) <domain type='kvm'>
	I0420 00:16:18.691309   94171 main.go:141] libmachine: (ha-371738-m03)   <name>ha-371738-m03</name>
	I0420 00:16:18.691318   94171 main.go:141] libmachine: (ha-371738-m03)   <memory unit='MiB'>2200</memory>
	I0420 00:16:18.691324   94171 main.go:141] libmachine: (ha-371738-m03)   <vcpu>2</vcpu>
	I0420 00:16:18.691329   94171 main.go:141] libmachine: (ha-371738-m03)   <features>
	I0420 00:16:18.691334   94171 main.go:141] libmachine: (ha-371738-m03)     <acpi/>
	I0420 00:16:18.691341   94171 main.go:141] libmachine: (ha-371738-m03)     <apic/>
	I0420 00:16:18.691346   94171 main.go:141] libmachine: (ha-371738-m03)     <pae/>
	I0420 00:16:18.691352   94171 main.go:141] libmachine: (ha-371738-m03)     
	I0420 00:16:18.691358   94171 main.go:141] libmachine: (ha-371738-m03)   </features>
	I0420 00:16:18.691365   94171 main.go:141] libmachine: (ha-371738-m03)   <cpu mode='host-passthrough'>
	I0420 00:16:18.691373   94171 main.go:141] libmachine: (ha-371738-m03)   
	I0420 00:16:18.691377   94171 main.go:141] libmachine: (ha-371738-m03)   </cpu>
	I0420 00:16:18.691383   94171 main.go:141] libmachine: (ha-371738-m03)   <os>
	I0420 00:16:18.691397   94171 main.go:141] libmachine: (ha-371738-m03)     <type>hvm</type>
	I0420 00:16:18.691407   94171 main.go:141] libmachine: (ha-371738-m03)     <boot dev='cdrom'/>
	I0420 00:16:18.691439   94171 main.go:141] libmachine: (ha-371738-m03)     <boot dev='hd'/>
	I0420 00:16:18.691459   94171 main.go:141] libmachine: (ha-371738-m03)     <bootmenu enable='no'/>
	I0420 00:16:18.691468   94171 main.go:141] libmachine: (ha-371738-m03)   </os>
	I0420 00:16:18.691474   94171 main.go:141] libmachine: (ha-371738-m03)   <devices>
	I0420 00:16:18.691493   94171 main.go:141] libmachine: (ha-371738-m03)     <disk type='file' device='cdrom'>
	I0420 00:16:18.691516   94171 main.go:141] libmachine: (ha-371738-m03)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/boot2docker.iso'/>
	I0420 00:16:18.691554   94171 main.go:141] libmachine: (ha-371738-m03)       <target dev='hdc' bus='scsi'/>
	I0420 00:16:18.691585   94171 main.go:141] libmachine: (ha-371738-m03)       <readonly/>
	I0420 00:16:18.691595   94171 main.go:141] libmachine: (ha-371738-m03)     </disk>
	I0420 00:16:18.691604   94171 main.go:141] libmachine: (ha-371738-m03)     <disk type='file' device='disk'>
	I0420 00:16:18.691616   94171 main.go:141] libmachine: (ha-371738-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 00:16:18.691632   94171 main.go:141] libmachine: (ha-371738-m03)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/ha-371738-m03.rawdisk'/>
	I0420 00:16:18.691645   94171 main.go:141] libmachine: (ha-371738-m03)       <target dev='hda' bus='virtio'/>
	I0420 00:16:18.691657   94171 main.go:141] libmachine: (ha-371738-m03)     </disk>
	I0420 00:16:18.691669   94171 main.go:141] libmachine: (ha-371738-m03)     <interface type='network'>
	I0420 00:16:18.691679   94171 main.go:141] libmachine: (ha-371738-m03)       <source network='mk-ha-371738'/>
	I0420 00:16:18.691684   94171 main.go:141] libmachine: (ha-371738-m03)       <model type='virtio'/>
	I0420 00:16:18.691692   94171 main.go:141] libmachine: (ha-371738-m03)     </interface>
	I0420 00:16:18.691697   94171 main.go:141] libmachine: (ha-371738-m03)     <interface type='network'>
	I0420 00:16:18.691709   94171 main.go:141] libmachine: (ha-371738-m03)       <source network='default'/>
	I0420 00:16:18.691717   94171 main.go:141] libmachine: (ha-371738-m03)       <model type='virtio'/>
	I0420 00:16:18.691721   94171 main.go:141] libmachine: (ha-371738-m03)     </interface>
	I0420 00:16:18.691727   94171 main.go:141] libmachine: (ha-371738-m03)     <serial type='pty'>
	I0420 00:16:18.691734   94171 main.go:141] libmachine: (ha-371738-m03)       <target port='0'/>
	I0420 00:16:18.691739   94171 main.go:141] libmachine: (ha-371738-m03)     </serial>
	I0420 00:16:18.691748   94171 main.go:141] libmachine: (ha-371738-m03)     <console type='pty'>
	I0420 00:16:18.691753   94171 main.go:141] libmachine: (ha-371738-m03)       <target type='serial' port='0'/>
	I0420 00:16:18.691760   94171 main.go:141] libmachine: (ha-371738-m03)     </console>
	I0420 00:16:18.691766   94171 main.go:141] libmachine: (ha-371738-m03)     <rng model='virtio'>
	I0420 00:16:18.691775   94171 main.go:141] libmachine: (ha-371738-m03)       <backend model='random'>/dev/random</backend>
	I0420 00:16:18.691780   94171 main.go:141] libmachine: (ha-371738-m03)     </rng>
	I0420 00:16:18.691787   94171 main.go:141] libmachine: (ha-371738-m03)     
	I0420 00:16:18.691791   94171 main.go:141] libmachine: (ha-371738-m03)     
	I0420 00:16:18.691796   94171 main.go:141] libmachine: (ha-371738-m03)   </devices>
	I0420 00:16:18.691801   94171 main.go:141] libmachine: (ha-371738-m03) </domain>
	I0420 00:16:18.691808   94171 main.go:141] libmachine: (ha-371738-m03) 
	I0420 00:16:18.698609   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:9a:ab:84 in network default
	I0420 00:16:18.699178   94171 main.go:141] libmachine: (ha-371738-m03) Ensuring networks are active...
	I0420 00:16:18.699212   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:18.700006   94171 main.go:141] libmachine: (ha-371738-m03) Ensuring network default is active
	I0420 00:16:18.700336   94171 main.go:141] libmachine: (ha-371738-m03) Ensuring network mk-ha-371738 is active
	I0420 00:16:18.700677   94171 main.go:141] libmachine: (ha-371738-m03) Getting domain xml...
	I0420 00:16:18.701358   94171 main.go:141] libmachine: (ha-371738-m03) Creating domain...
	I0420 00:16:19.935037   94171 main.go:141] libmachine: (ha-371738-m03) Waiting to get IP...
	I0420 00:16:19.935860   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:19.936363   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:19.936391   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:19.936337   94971 retry.go:31] will retry after 252.638179ms: waiting for machine to come up
	I0420 00:16:20.190786   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:20.191318   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:20.191350   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:20.191278   94971 retry.go:31] will retry after 315.019844ms: waiting for machine to come up
	I0420 00:16:20.507924   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:20.508352   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:20.508386   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:20.508281   94971 retry.go:31] will retry after 394.142198ms: waiting for machine to come up
	I0420 00:16:20.903536   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:20.904177   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:20.904215   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:20.904133   94971 retry.go:31] will retry after 508.732448ms: waiting for machine to come up
	I0420 00:16:21.414506   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:21.415012   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:21.415046   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:21.414949   94971 retry.go:31] will retry after 668.372993ms: waiting for machine to come up
	I0420 00:16:22.084735   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:22.085283   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:22.085305   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:22.085242   94971 retry.go:31] will retry after 684.969185ms: waiting for machine to come up
	I0420 00:16:22.771773   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:22.772407   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:22.772438   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:22.772356   94971 retry.go:31] will retry after 829.690915ms: waiting for machine to come up
	I0420 00:16:23.603601   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:23.604083   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:23.604111   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:23.604023   94971 retry.go:31] will retry after 1.241006066s: waiting for machine to come up
	I0420 00:16:24.846365   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:24.846812   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:24.846834   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:24.846780   94971 retry.go:31] will retry after 1.636439727s: waiting for machine to come up
	I0420 00:16:26.485446   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:26.485860   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:26.485901   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:26.485809   94971 retry.go:31] will retry after 2.040758446s: waiting for machine to come up
	I0420 00:16:28.528569   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:28.529195   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:28.529226   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:28.529141   94971 retry.go:31] will retry after 2.173228331s: waiting for machine to come up
	I0420 00:16:30.704551   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:30.705015   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:30.705045   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:30.704968   94971 retry.go:31] will retry after 2.195131281s: waiting for machine to come up
	I0420 00:16:32.902260   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:32.902681   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:32.902706   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:32.902637   94971 retry.go:31] will retry after 4.511428582s: waiting for machine to come up
	I0420 00:16:37.418440   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:37.418811   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:37.418847   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:37.418754   94971 retry.go:31] will retry after 5.620123819s: waiting for machine to come up
	I0420 00:16:43.043791   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.044254   94171 main.go:141] libmachine: (ha-371738-m03) Found IP for machine: 192.168.39.253
	I0420 00:16:43.044277   94171 main.go:141] libmachine: (ha-371738-m03) Reserving static IP address...
	I0420 00:16:43.044292   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has current primary IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.044837   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find host DHCP lease matching {name: "ha-371738-m03", mac: "52:54:00:cc:e5:aa", ip: "192.168.39.253"} in network mk-ha-371738
	I0420 00:16:43.117958   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Getting to WaitForSSH function...
	I0420 00:16:43.117991   94171 main.go:141] libmachine: (ha-371738-m03) Reserved static IP address: 192.168.39.253
	I0420 00:16:43.118006   94171 main.go:141] libmachine: (ha-371738-m03) Waiting for SSH to be available...
	I0420 00:16:43.120733   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.121205   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.121235   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.121411   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Using SSH client type: external
	I0420 00:16:43.121442   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa (-rw-------)
	I0420 00:16:43.121473   94171 main.go:141] libmachine: (ha-371738-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 00:16:43.121493   94171 main.go:141] libmachine: (ha-371738-m03) DBG | About to run SSH command:
	I0420 00:16:43.121514   94171 main.go:141] libmachine: (ha-371738-m03) DBG | exit 0
	I0420 00:16:43.250207   94171 main.go:141] libmachine: (ha-371738-m03) DBG | SSH cmd err, output: <nil>: 
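The WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and probes the guest by running `exit 0`. A rough Go sketch of that probe using os/exec; the option list is copied from the log line, the probeSSH name is made up:

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `exit 0` on the guest through the external ssh client,
// with the same non-interactive options the log shows. A nil error means
// sshd is up and the key is accepted.
func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := probeSSH("192.168.39.253", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("SSH is available")
	}
}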
	I0420 00:16:43.250498   94171 main.go:141] libmachine: (ha-371738-m03) KVM machine creation complete!
	I0420 00:16:43.250794   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetConfigRaw
	I0420 00:16:43.251384   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:43.251599   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:43.251771   94171 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 00:16:43.251790   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:16:43.253233   94171 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 00:16:43.253251   94171 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 00:16:43.253260   94171 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 00:16:43.253273   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.255679   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.256015   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.256049   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.256210   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:43.256409   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.256620   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.256760   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:43.256895   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:43.257137   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:43.257154   94171 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 00:16:43.356613   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:16:43.356633   94171 main.go:141] libmachine: Detecting the provisioner...
	I0420 00:16:43.356641   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.360590   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.361070   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.361104   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.361245   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:43.361479   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.361675   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.361828   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:43.362065   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:43.362272   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:43.362290   94171 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 00:16:43.463052   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 00:16:43.463118   94171 main.go:141] libmachine: found compatible host: buildroot
	I0420 00:16:43.463125   94171 main.go:141] libmachine: Provisioning with buildroot...
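"Detecting the provisioner" above is done by catting /etc/os-release and matching the ID field (Buildroot here). A small sketch of that parse, with a hypothetical detectProvisioner function and only the buildroot case handled:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner pulls the ID= field out of /etc/os-release content and
// decides which provisioner to use; only the buildroot case is shown.
func detectProvisioner(osRelease string) (string, error) {
	scanner := bufio.NewScanner(strings.NewReader(osRelease))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if strings.HasPrefix(line, "ID=") {
			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			if id == "buildroot" {
				return "buildroot", nil
			}
			return "", fmt.Errorf("no provisioner for ID=%q", id)
		}
	}
	return "", fmt.Errorf("ID= not found in os-release")
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	p, err := detectProvisioner(sample)
	fmt.Println(p, err)
}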
	I0420 00:16:43.463132   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetMachineName
	I0420 00:16:43.463434   94171 buildroot.go:166] provisioning hostname "ha-371738-m03"
	I0420 00:16:43.463458   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetMachineName
	I0420 00:16:43.463668   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.466501   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.466893   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.466924   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.467103   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:43.467289   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.467484   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.467645   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:43.467857   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:43.468061   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:43.468084   94171 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-371738-m03 && echo "ha-371738-m03" | sudo tee /etc/hostname
	I0420 00:16:43.591113   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738-m03
	
	I0420 00:16:43.591144   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.593966   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.594333   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.594366   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.594535   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:43.594715   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.594933   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.595134   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:43.595361   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:43.595521   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:43.595537   94171 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-371738-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-371738-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-371738-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:16:43.707667   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
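Provisioning the hostname is two remote commands: write /etc/hostname via tee, then patch the 127.0.1.1 entry in /etc/hosts (the shell block above). A sketch of composing those commands in Go; setHostnameCommands is a placeholder name, and the shell text mirrors the log rather than minikube's exact template:

package main

import "fmt"

// setHostnameCommands returns the remote commands the provisioner runs:
// first set the hostname and persist it, then make sure /etc/hosts has a
// 127.0.1.1 entry for it (the same shell shown in the log).
func setHostnameCommands(hostname string) []string {
	setCmd := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname)
	hostsCmd := fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	return []string{setCmd, hostsCmd}
}

func main() {
	for _, c := range setHostnameCommands("ha-371738-m03") {
		fmt.Println(c)
	}
}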
	I0420 00:16:43.707701   94171 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:16:43.707721   94171 buildroot.go:174] setting up certificates
	I0420 00:16:43.707734   94171 provision.go:84] configureAuth start
	I0420 00:16:43.707747   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetMachineName
	I0420 00:16:43.708065   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:16:43.710910   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.711340   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.711369   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.711533   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.713969   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.714365   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.714391   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.714545   94171 provision.go:143] copyHostCerts
	I0420 00:16:43.714580   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:16:43.714629   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:16:43.714638   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:16:43.714715   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:16:43.714816   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:16:43.714841   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:16:43.714847   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:16:43.714885   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:16:43.714947   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:16:43.714970   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:16:43.714980   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:16:43.715010   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:16:43.715078   94171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.ha-371738-m03 san=[127.0.0.1 192.168.39.253 ha-371738-m03 localhost minikube]
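The server certificate generated above is signed by the shared minikube CA and carries the node's IP and names as subject alternative names (the san=[...] list). A compact crypto/x509 sketch of building such a CA-signed server certificate; key sizes, validity periods, and field choices are illustrative, not minikube's exact settings:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key pair (in minikube this is the pre-existing ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log: loopback, the node IP, and
	// the machine/localhost names.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-371738-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.253")},
		DNSNames:     []string{"ha-371738-m03", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(len(srvDER), err)
}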
	I0420 00:16:44.053765   94171 provision.go:177] copyRemoteCerts
	I0420 00:16:44.053828   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:16:44.053856   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.056720   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.057090   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.057127   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.057288   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.057544   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.057702   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.057881   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:16:44.145602   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:16:44.145673   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:16:44.173227   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:16:44.173299   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0420 00:16:44.200252   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:16:44.200319   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 00:16:44.227652   94171 provision.go:87] duration metric: took 519.904112ms to configureAuth
	I0420 00:16:44.227683   94171 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:16:44.227875   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:16:44.227956   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.230922   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.231348   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.231376   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.231563   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.231771   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.231958   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.232122   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.232283   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:44.232458   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:44.232475   94171 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:16:44.505879   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:16:44.505914   94171 main.go:141] libmachine: Checking connection to Docker...
	I0420 00:16:44.505925   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetURL
	I0420 00:16:44.507319   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Using libvirt version 6000000
	I0420 00:16:44.509562   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.509911   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.509940   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.510151   94171 main.go:141] libmachine: Docker is up and running!
	I0420 00:16:44.510168   94171 main.go:141] libmachine: Reticulating splines...
	I0420 00:16:44.510176   94171 client.go:171] duration metric: took 26.147070641s to LocalClient.Create
	I0420 00:16:44.510196   94171 start.go:167] duration metric: took 26.147135792s to libmachine.API.Create "ha-371738"
	I0420 00:16:44.510206   94171 start.go:293] postStartSetup for "ha-371738-m03" (driver="kvm2")
	I0420 00:16:44.510215   94171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:16:44.510242   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.510460   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:16:44.510486   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.512596   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.512922   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.512952   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.513048   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.513227   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.513394   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.513525   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:16:44.593717   94171 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:16:44.598653   94171 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:16:44.598679   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:16:44.598752   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:16:44.598845   94171 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:16:44.598857   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:16:44.598955   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:16:44.610662   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:16:44.636626   94171 start.go:296] duration metric: took 126.40507ms for postStartSetup
	I0420 00:16:44.636684   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetConfigRaw
	I0420 00:16:44.637394   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:16:44.640744   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.641145   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.641167   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.641457   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:16:44.641685   94171 start.go:128] duration metric: took 26.2990747s to createHost
	I0420 00:16:44.641717   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.644013   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.644493   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.644517   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.644655   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.644865   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.645038   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.645189   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.645379   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:44.645532   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:44.645543   94171 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:16:44.751198   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713572204.724524131
	
	I0420 00:16:44.751221   94171 fix.go:216] guest clock: 1713572204.724524131
	I0420 00:16:44.751231   94171 fix.go:229] Guest: 2024-04-20 00:16:44.724524131 +0000 UTC Remote: 2024-04-20 00:16:44.641701819 +0000 UTC m=+154.453413482 (delta=82.822312ms)
	I0420 00:16:44.751253   94171 fix.go:200] guest clock delta is within tolerance: 82.822312ms
	I0420 00:16:44.751260   94171 start.go:83] releasing machines lock for "ha-371738-m03", held for 26.408759008s
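The "guest clock" lines above come from running `date +%s.%N` on the guest (the `%!s(MISSING)` tokens are just the logger mangling the literal format verbs in the echoed command) and comparing it with the host's clock; the machine is accepted when the delta is within tolerance. A tiny sketch of that comparison, with an assumed 2s tolerance:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is close enough to the
// host clock; minikube logs the delta and only intervenes when it is
// outside tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1713572204, 724524131) // from `date +%s.%N` on the guest
	host := time.Unix(1713572204, 641701819)  // host-side timestamp
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}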
	I0420 00:16:44.751282   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.751568   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:16:44.753935   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.754331   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.754361   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.756720   94171 out.go:177] * Found network options:
	I0420 00:16:44.757982   94171 out.go:177]   - NO_PROXY=192.168.39.217,192.168.39.48
	W0420 00:16:44.759243   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	W0420 00:16:44.759265   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 00:16:44.759277   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.759767   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.759976   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.760084   94171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:16:44.760141   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	W0420 00:16:44.760214   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	W0420 00:16:44.760242   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 00:16:44.760342   94171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:16:44.760368   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.763019   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.763210   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.763428   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.763453   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.763597   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.763757   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.763763   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.763781   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.763965   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.763989   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.764165   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.764191   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:16:44.764304   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.764520   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:16:45.000306   94171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 00:16:45.008157   94171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:16:45.008266   94171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:16:45.027279   94171 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 00:16:45.027307   94171 start.go:494] detecting cgroup driver to use...
	I0420 00:16:45.027381   94171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:16:45.044536   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:16:45.059602   94171 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:16:45.059655   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:16:45.074069   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:16:45.088108   94171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:16:45.215697   94171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:16:45.378103   94171 docker.go:233] disabling docker service ...
	I0420 00:16:45.378185   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:16:45.395365   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:16:45.409169   94171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:16:45.557032   94171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:16:45.690417   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:16:45.707696   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:16:45.729031   94171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:16:45.729091   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.741885   94171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:16:45.741960   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.753900   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.765056   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.778946   94171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:16:45.792094   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.804565   94171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.824863   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
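The runtime is configured above by rewriting /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as the cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. A sketch of the same edits expressed as the list of remote sed commands; the helper name is hypothetical and the commands mirror the log lines:

package main

import "fmt"

// crioConfigCommands returns the remote sed edits applied to
// /etc/crio/crio.conf.d/02-crio.conf, matching the steps in the log.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}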
	I0420 00:16:45.836644   94171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:16:45.847812   94171 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 00:16:45.847875   94171 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 00:16:45.863416   94171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
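The netfilter check above fails on the fresh guest because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so minikube loads the module and then enables IPv4 forwarding. A minimal local equivalent of those two steps (needs root; the procfs path is the standard one):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Load br_netfilter so /proc/sys/net/bridge/bridge-nf-call-iptables exists.
	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Printf("modprobe failed: %v: %s\n", err, out)
		return
	}
	// Enable IPv4 forwarding, the equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}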
	I0420 00:16:45.873866   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:16:46.009171   94171 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 00:16:46.159075   94171 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:16:46.159146   94171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:16:46.164804   94171 start.go:562] Will wait 60s for crictl version
	I0420 00:16:46.164851   94171 ssh_runner.go:195] Run: which crictl
	I0420 00:16:46.169388   94171 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:16:46.208706   94171 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 00:16:46.208793   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:16:46.242307   94171 ssh_runner.go:195] Run: crio --version
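After restarting CRI-O, minikube waits up to 60s for the socket path to appear and then asks crictl for the runtime version (the block above). A sketch of that readiness poll; the polling interval is an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls for a unix socket path until it exists or the
// timeout expires, like the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}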
	I0420 00:16:46.274164   94171 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:16:46.275633   94171 out.go:177]   - env NO_PROXY=192.168.39.217
	I0420 00:16:46.276957   94171 out.go:177]   - env NO_PROXY=192.168.39.217,192.168.39.48
	I0420 00:16:46.278171   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:16:46.280769   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:46.281128   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:46.281150   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:46.281401   94171 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:16:46.285943   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:16:46.300171   94171 mustload.go:65] Loading cluster: ha-371738
	I0420 00:16:46.300379   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:16:46.300630   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:16:46.300668   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:16:46.315016   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I0420 00:16:46.315401   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:16:46.315881   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:16:46.315907   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:16:46.316223   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:16:46.316431   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:16:46.318071   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:16:46.318335   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:16:46.318367   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:16:46.332056   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0420 00:16:46.332425   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:16:46.332843   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:16:46.332864   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:16:46.333148   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:16:46.333356   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:16:46.333529   94171 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738 for IP: 192.168.39.253
	I0420 00:16:46.333548   94171 certs.go:194] generating shared ca certs ...
	I0420 00:16:46.333566   94171 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:16:46.333716   94171 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:16:46.333763   94171 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:16:46.333777   94171 certs.go:256] generating profile certs ...
	I0420 00:16:46.333870   94171 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key
	I0420 00:16:46.333902   94171 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.441f7660
	I0420 00:16:46.333921   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.441f7660 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.48 192.168.39.253 192.168.39.254]
	I0420 00:16:46.571466   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.441f7660 ...
	I0420 00:16:46.571502   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.441f7660: {Name:mk5163288e441a9f3612764637090483eba4cfc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:16:46.571738   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.441f7660 ...
	I0420 00:16:46.571765   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.441f7660: {Name:mk7b6be0777ba3300f48a9e2cc1b97a759a2b430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:16:46.571878   94171 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.441f7660 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt
	I0420 00:16:46.572024   94171 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.441f7660 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key
	I0420 00:16:46.572171   94171 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key
	I0420 00:16:46.572190   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:16:46.572204   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:16:46.572219   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:16:46.572235   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:16:46.572254   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:16:46.572271   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:16:46.572286   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:16:46.572299   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:16:46.572347   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:16:46.572377   94171 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:16:46.572388   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:16:46.572410   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:16:46.572441   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:16:46.572462   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:16:46.572519   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:16:46.572567   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:16:46.572595   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:16:46.572616   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:16:46.572666   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:16:46.575877   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:16:46.576324   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:16:46.576350   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:16:46.576531   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:16:46.576716   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:16:46.576910   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:16:46.577055   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:16:46.657596   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0420 00:16:46.664152   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0420 00:16:46.681082   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0420 00:16:46.687658   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0420 00:16:46.700993   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0420 00:16:46.706697   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0420 00:16:46.719046   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0420 00:16:46.724454   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0420 00:16:46.738960   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0420 00:16:46.744072   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0420 00:16:46.758038   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0420 00:16:46.763486   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0420 00:16:46.776597   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:16:46.806861   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:16:46.834440   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:16:46.860808   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:16:46.886262   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0420 00:16:46.912856   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:16:46.938598   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:16:46.963855   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 00:16:46.991253   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:16:47.018911   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:16:47.045377   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:16:47.075192   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0420 00:16:47.094910   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0420 00:16:47.114705   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0420 00:16:47.134134   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0420 00:16:47.154366   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0420 00:16:47.174174   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0420 00:16:47.193905   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
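A joining control-plane node needs the cluster-wide key material only the primary has: the service-account key pair, the front-proxy CA, and the etcd CA. The log above shows these being read from the existing control plane and copied into /var/lib/minikube/certs on the new node. A sketch of that file list and copy loop; copyToNode is a placeholder for the stat-then-scp step:

package main

import "fmt"

// sharedControlPlaneFiles are the certs/keys every additional control-plane
// node must share with the primary, as copied in the log above.
var sharedControlPlaneFiles = []string{
	"/var/lib/minikube/certs/sa.pub",
	"/var/lib/minikube/certs/sa.key",
	"/var/lib/minikube/certs/front-proxy-ca.crt",
	"/var/lib/minikube/certs/front-proxy-ca.key",
	"/var/lib/minikube/certs/etcd/ca.crt",
	"/var/lib/minikube/certs/etcd/ca.key",
}

// copyToNode is a stand-in for scp'ing bytes read from the primary node
// to the same path on the joining node.
func copyToNode(node, path string, data []byte) error {
	fmt.Printf("copy %d bytes to %s:%s\n", len(data), node, path)
	return nil
}

func main() {
	for _, p := range sharedControlPlaneFiles {
		// In the real flow the bytes come from reading the file on the primary.
		_ = copyToNode("ha-371738-m03", p, []byte("...contents from primary..."))
	}
}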
	I0420 00:16:47.212202   94171 ssh_runner.go:195] Run: openssl version
	I0420 00:16:47.218244   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:16:47.230302   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:16:47.234877   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:16:47.234916   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:16:47.240935   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 00:16:47.253253   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:16:47.265076   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:16:47.269789   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:16:47.269828   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:16:47.275781   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 00:16:47.288687   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:16:47.301056   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:16:47.306160   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:16:47.306218   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:16:47.312169   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
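System trust for the extra CAs is wired up above by hashing each PEM with `openssl x509 -hash` and symlinking it as `<hash>.0` under /etc/ssl/certs (the 3ec20f2e.0, b5213941.0, and 51391683.0 links). A sketch reproducing those two steps locally; it assumes openssl is on PATH and the trustCert name is made up:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash for a PEM file and creates
// the /etc/ssl/certs/<hash>.0 symlink that the TLS stack looks up.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %v", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}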
	I0420 00:16:47.324165   94171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:16:47.328487   94171 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 00:16:47.328544   94171 kubeadm.go:928] updating node {m03 192.168.39.253 8443 v1.30.0 crio true true} ...
	I0420 00:16:47.328643   94171 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-371738-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
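The node-specific kubelet flags above (hostname override, node IP, kubeconfig paths) are rendered into a systemd drop-in, later installed as 10-kubeadm.conf. A text/template sketch of producing that fragment; the template text mirrors the log output rather than minikube's exact template:

package main

import (
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.0",
		"NodeName":          "ha-371738-m03",
		"NodeIP":            "192.168.39.253",
	})
}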
	I0420 00:16:47.328671   94171 kube-vip.go:111] generating kube-vip config ...
	I0420 00:16:47.328705   94171 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0420 00:16:47.348476   94171 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 00:16:47.348546   94171 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
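kube-vip runs as a static pod on each control-plane node and holds the virtual IP (192.168.39.254 here), with control-plane load-balancing auto-enabled as the log notes. A short sketch of filling in only the per-cluster values of that manifest; the struct and the trimmed template are illustrative, the full pod spec is as shown above:

package main

import (
	"os"
	"text/template"
)

// Only the fields that vary per cluster are parameterized; the rest of the
// static pod spec (capabilities, volume mounts, hostNetwork) is fixed.
const kubeVipEnv = `    - name: port
      value: "{{.Port}}"
    - name: address
      value: {{.VIP}}
    - name: lb_enable
      value: "{{.LBEnabled}}"
    - name: lb_port
      value: "{{.Port}}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipEnv))
	_ = t.Execute(os.Stdout, struct {
		VIP       string
		Port      int
		LBEnabled bool
	}{VIP: "192.168.39.254", Port: 8443, LBEnabled: true})
}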
	I0420 00:16:47.348613   94171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:16:47.359896   94171 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0420 00:16:47.359953   94171 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0420 00:16:47.370514   94171 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0420 00:16:47.370541   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0420 00:16:47.370601   94171 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0420 00:16:47.370650   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:16:47.370601   94171 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0420 00:16:47.370725   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0420 00:16:47.370767   94171 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0420 00:16:47.370606   94171 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0420 00:16:47.387017   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0420 00:16:47.387035   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0420 00:16:47.387053   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0420 00:16:47.387084   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0420 00:16:47.387113   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0420 00:16:47.387094   94171 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0420 00:16:47.424078   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0420 00:16:47.424121   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0420 00:16:48.380427   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0420 00:16:48.390505   94171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0420 00:16:48.409744   94171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:16:48.428990   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0420 00:16:48.448874   94171 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0420 00:16:48.453592   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:16:48.466689   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:16:48.593429   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:16:48.613993   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:16:48.614349   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:16:48.614395   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:16:48.630644   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0420 00:16:48.631092   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:16:48.631551   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:16:48.631574   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:16:48.632004   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:16:48.632250   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:16:48.632443   94171 start.go:316] joinCluster: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:16:48.632592   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0420 00:16:48.632627   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:16:48.635807   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:16:48.636274   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:16:48.636311   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:16:48.636490   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:16:48.636674   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:16:48.636846   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:16:48.636988   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:16:48.812312   94171 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:16:48.812382   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pf094g.f6vyxymxfplz8bcz --discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-371738-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443"
	I0420 00:17:14.043770   94171 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pf094g.f6vyxymxfplz8bcz --discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-371738-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443": (25.231352321s)
	I0420 00:17:14.043833   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0420 00:17:14.522012   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-371738-m03 minikube.k8s.io/updated_at=2024_04_20T00_17_14_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-371738 minikube.k8s.io/primary=false
	I0420 00:17:14.653111   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-371738-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0420 00:17:14.795721   94171 start.go:318] duration metric: took 26.163270633s to joinCluster
	I0420 00:17:14.795813   94171 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:17:14.797711   94171 out.go:177] * Verifying Kubernetes components...
	I0420 00:17:14.796145   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:17:14.799494   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:17:15.074891   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:17:15.148253   94171 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:17:15.148627   94171 kapi.go:59] client config for ha-371738: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt", KeyFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key", CAFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0420 00:17:15.148716   94171 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0420 00:17:15.149022   94171 node_ready.go:35] waiting up to 6m0s for node "ha-371738-m03" to be "Ready" ...
	I0420 00:17:15.149116   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:15.149127   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:15.149135   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:15.149144   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:15.153213   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:15.649865   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:15.649887   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:15.649895   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:15.649900   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:15.653392   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:16.150122   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:16.150151   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:16.150163   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:16.150169   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:16.154626   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:16.649608   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:16.649639   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:16.649650   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:16.649655   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:16.654169   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:17.149437   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:17.149467   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:17.149478   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:17.149485   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:17.152912   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:17.153692   94171 node_ready.go:53] node "ha-371738-m03" has status "Ready":"False"
	I0420 00:17:17.650093   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:17.650120   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:17.650131   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:17.650138   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:17.653930   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:18.149848   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:18.149875   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:18.149886   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:18.149892   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:18.153841   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:18.649637   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:18.649665   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:18.649675   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:18.649679   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:18.653420   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.149337   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.149360   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.149368   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.149373   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.153180   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.154190   94171 node_ready.go:49] node "ha-371738-m03" has status "Ready":"True"
	I0420 00:17:19.154213   94171 node_ready.go:38] duration metric: took 4.005171084s for node "ha-371738-m03" to be "Ready" ...
	I0420 00:17:19.154225   94171 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:17:19.154295   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:19.154309   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.154320   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.154328   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.161220   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:17:19.170221   94171 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.170306   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9hc82
	I0420 00:17:19.170318   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.170325   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.170329   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.174058   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.174629   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:19.174647   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.174656   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.174661   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.177880   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.178568   94171 pod_ready.go:92] pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:19.178599   94171 pod_ready.go:81] duration metric: took 8.345138ms for pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.178616   94171 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.178699   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jvvpr
	I0420 00:17:19.178710   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.178720   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.178727   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.181883   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.182845   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:19.182867   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.182878   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.182884   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.185713   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:19.186730   94171 pod_ready.go:92] pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:19.186745   94171 pod_ready.go:81] duration metric: took 8.121891ms for pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.186758   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.186810   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738
	I0420 00:17:19.186819   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.186826   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.186832   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.191012   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:19.193243   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:19.193259   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.193266   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.193270   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.195909   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:19.196534   94171 pod_ready.go:92] pod "etcd-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:19.196551   94171 pod_ready.go:81] duration metric: took 9.786532ms for pod "etcd-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.196561   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.196627   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:17:19.196637   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.196647   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.196654   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.199704   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.200922   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:19.200947   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.200958   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.200964   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.206449   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:17:19.207537   94171 pod_ready.go:92] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:19.207556   94171 pod_ready.go:81] duration metric: took 10.986108ms for pod "etcd-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.207567   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.349944   94171 request.go:629] Waited for 142.27904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:19.350026   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:19.350034   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.350045   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.350052   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.353760   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.549955   94171 request.go:629] Waited for 195.385232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.550011   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.550016   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.550024   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.550031   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.553053   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.750003   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:19.750030   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.750042   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.750047   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.754300   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:19.950124   94171 request.go:629] Waited for 194.356929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.950198   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.950205   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.950215   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.950222   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.954126   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:20.207997   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:20.208019   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:20.208027   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:20.208032   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:20.211245   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:20.350148   94171 request.go:629] Waited for 138.082811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:20.350235   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:20.350247   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:20.350255   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:20.350262   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:20.354049   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:20.708688   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:20.708713   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:20.708721   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:20.708727   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:20.715110   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:17:20.750373   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:20.750397   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:20.750410   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:20.750416   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:20.753967   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:21.208038   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:21.208068   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:21.208105   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:21.208135   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:21.211721   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:21.212964   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:21.212979   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:21.212983   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:21.212986   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:21.215881   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:21.216521   94171 pod_ready.go:102] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"False"
	I0420 00:17:21.708713   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:21.708733   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:21.708740   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:21.708744   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:21.712073   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:21.713441   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:21.713464   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:21.713472   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:21.713480   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:21.719392   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:17:22.208172   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:22.208193   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:22.208201   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:22.208204   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:22.211830   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:22.212735   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:22.212753   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:22.212762   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:22.212766   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:22.215999   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:22.708600   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:22.708634   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:22.708652   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:22.708659   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:22.713963   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:17:22.715790   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:22.715811   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:22.715821   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:22.715825   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:22.721370   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:17:23.207970   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:23.207991   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:23.207999   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:23.208004   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:23.211671   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:23.212938   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:23.212954   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:23.212962   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:23.212965   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:23.216279   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:23.217044   94171 pod_ready.go:102] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"False"
	I0420 00:17:23.708098   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:23.708121   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:23.708129   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:23.708134   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:23.711492   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:23.712329   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:23.712349   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:23.712356   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:23.712361   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:23.715089   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:24.208450   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:24.208474   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:24.208482   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:24.208486   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:24.211820   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:24.212843   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:24.212864   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:24.212875   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:24.212883   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:24.216198   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:24.708417   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:24.708439   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:24.708446   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:24.708451   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:24.712001   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:24.712698   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:24.712716   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:24.712723   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:24.712729   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:24.716272   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:25.208150   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:25.208173   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:25.208181   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:25.208185   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:25.211935   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:25.213021   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:25.213038   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:25.213045   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:25.213049   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:25.216298   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:25.708629   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:25.708656   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:25.708665   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:25.708670   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:25.712736   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:25.713787   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:25.713805   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:25.713814   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:25.713821   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:25.717454   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:25.718249   94171 pod_ready.go:102] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"False"
	I0420 00:17:26.208667   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:26.208695   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:26.208709   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:26.208716   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:26.212163   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:26.213042   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:26.213057   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:26.213064   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:26.213071   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:26.216194   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:26.707968   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:26.707991   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:26.707999   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:26.708004   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:26.711795   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:26.712642   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:26.712657   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:26.712664   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:26.712669   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:26.715623   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:27.208208   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:27.208231   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:27.208240   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:27.208245   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:27.211874   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:27.212747   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:27.212762   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:27.212768   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:27.212773   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:27.215496   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:27.707826   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:27.707847   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:27.707854   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:27.707858   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:27.711608   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:27.712623   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:27.712652   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:27.712660   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:27.712664   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:27.715600   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:28.208585   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:28.208606   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:28.208613   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:28.208617   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:28.212157   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:28.213175   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:28.213190   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:28.213197   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:28.213212   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:28.216271   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:28.216968   94171 pod_ready.go:102] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"False"
	I0420 00:17:28.708114   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:28.708143   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:28.708152   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:28.708156   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:28.711583   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:28.712737   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:28.712753   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:28.712762   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:28.712766   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:28.715715   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.208753   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:29.208789   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.208798   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.208803   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.212154   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.213356   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:29.213372   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.213379   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.213383   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.216508   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.707841   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:29.707868   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.707879   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.707886   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.711675   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.712601   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:29.712620   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.712629   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.712635   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.716002   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.716789   94171 pod_ready.go:92] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.716808   94171 pod_ready.go:81] duration metric: took 10.509234817s for pod "etcd-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.716830   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.716895   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738
	I0420 00:17:29.716905   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.716915   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.716920   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.719594   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.720495   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:29.720511   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.720517   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.720521   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.723175   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.723841   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.723864   94171 pod_ready.go:81] duration metric: took 7.024745ms for pod "kube-apiserver-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.723876   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.723940   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:17:29.723952   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.723960   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.723967   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.726704   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.727342   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:29.727362   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.727373   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.727378   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.729785   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.730332   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.730352   94171 pod_ready.go:81] duration metric: took 6.468527ms for pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.730362   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.730425   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m03
	I0420 00:17:29.730436   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.730446   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.730451   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.733047   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.733781   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:29.733801   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.733811   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.733818   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.736687   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.737846   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738-m03" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.737867   94171 pod_ready.go:81] duration metric: took 7.496633ms for pod "kube-apiserver-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.737879   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.737936   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738
	I0420 00:17:29.737947   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.737957   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.737964   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.741179   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.741855   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:29.741873   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.741884   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.741893   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.749571   94171 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 00:17:29.750097   94171 pod_ready.go:92] pod "kube-controller-manager-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.750121   94171 pod_ready.go:81] duration metric: took 12.234318ms for pod "kube-controller-manager-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.750133   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.908463   94171 request.go:629] Waited for 158.24934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738-m02
	I0420 00:17:29.908528   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738-m02
	I0420 00:17:29.908533   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.908541   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.908545   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.912055   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:30.108341   94171 request.go:629] Waited for 195.364227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:30.108405   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:30.108411   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.108422   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.108437   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.112635   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:30.113203   94171 pod_ready.go:92] pod "kube-controller-manager-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:30.113221   94171 pod_ready.go:81] duration metric: took 363.080361ms for pod "kube-controller-manager-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.113231   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.308591   94171 request.go:629] Waited for 195.287776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738-m03
	I0420 00:17:30.308657   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738-m03
	I0420 00:17:30.308662   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.308671   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.308678   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.312580   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:30.508194   94171 request.go:629] Waited for 194.465635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:30.508271   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:30.508282   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.508293   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.508306   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.511919   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:30.512809   94171 pod_ready.go:92] pod "kube-controller-manager-ha-371738-m03" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:30.512828   94171 pod_ready.go:81] duration metric: took 399.588508ms for pod "kube-controller-manager-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.512838   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59wls" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.707874   94171 request.go:629] Waited for 194.956694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59wls
	I0420 00:17:30.707942   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59wls
	I0420 00:17:30.707948   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.707957   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.707963   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.712104   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:30.908591   94171 request.go:629] Waited for 195.384985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:30.908692   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:30.908706   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.908719   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.908725   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.913248   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:30.914095   94171 pod_ready.go:92] pod "kube-proxy-59wls" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:30.914119   94171 pod_ready.go:81] duration metric: took 401.273767ms for pod "kube-proxy-59wls" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.914133   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-924z9" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.108117   94171 request.go:629] Waited for 193.908699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-924z9
	I0420 00:17:31.108188   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-924z9
	I0420 00:17:31.108194   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.108202   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.108206   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.112357   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:31.308918   94171 request.go:629] Waited for 195.365354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:31.308977   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:31.308991   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.309002   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.309010   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.312910   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:31.313955   94171 pod_ready.go:92] pod "kube-proxy-924z9" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:31.313973   94171 pod_ready.go:81] duration metric: took 399.833418ms for pod "kube-proxy-924z9" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.313982   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zw62l" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.508828   94171 request.go:629] Waited for 194.78105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw62l
	I0420 00:17:31.508938   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw62l
	I0420 00:17:31.508956   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.508965   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.508969   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.512828   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:31.708208   94171 request.go:629] Waited for 194.380563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:31.708298   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:31.708306   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.708320   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.708331   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.711388   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:31.712271   94171 pod_ready.go:92] pod "kube-proxy-zw62l" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:31.712288   94171 pod_ready.go:81] duration metric: took 398.299702ms for pod "kube-proxy-zw62l" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.712298   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.908385   94171 request.go:629] Waited for 196.005489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738
	I0420 00:17:31.908457   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738
	I0420 00:17:31.908464   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.908472   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.908480   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.912558   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:32.108639   94171 request.go:629] Waited for 195.372084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:32.108739   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:32.108753   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.108761   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.108767   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.113520   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:32.114965   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:32.114989   94171 pod_ready.go:81] duration metric: took 402.683186ms for pod "kube-scheduler-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.115002   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.308133   94171 request.go:629] Waited for 193.010716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m02
	I0420 00:17:32.308189   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m02
	I0420 00:17:32.308194   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.308204   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.308215   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.311763   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:32.508893   94171 request.go:629] Waited for 196.361088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:32.508966   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:32.508977   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.508986   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.508992   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.512382   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:32.512980   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:32.512998   94171 pod_ready.go:81] duration metric: took 397.989059ms for pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.513007   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.708181   94171 request.go:629] Waited for 195.082136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m03
	I0420 00:17:32.708242   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m03
	I0420 00:17:32.708247   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.708254   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.708259   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.712019   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:32.908239   94171 request.go:629] Waited for 195.354874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:32.908328   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:32.908341   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.908351   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.908359   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.911774   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:32.912491   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738-m03" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:32.912513   94171 pod_ready.go:81] duration metric: took 399.498356ms for pod "kube-scheduler-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.912528   94171 pod_ready.go:38] duration metric: took 13.758290828s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:17:32.912549   94171 api_server.go:52] waiting for apiserver process to appear ...
	I0420 00:17:32.912615   94171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:17:32.931149   94171 api_server.go:72] duration metric: took 18.135251217s to wait for apiserver process to appear ...
	I0420 00:17:32.931170   94171 api_server.go:88] waiting for apiserver healthz status ...
	I0420 00:17:32.931190   94171 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0420 00:17:32.937852   94171 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0420 00:17:32.937924   94171 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0420 00:17:32.937937   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.937945   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.937949   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.938889   94171 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0420 00:17:32.938961   94171 api_server.go:141] control plane version: v1.30.0
	I0420 00:17:32.938980   94171 api_server.go:131] duration metric: took 7.802392ms to wait for apiserver health ...
	I0420 00:17:32.938994   94171 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 00:17:33.108421   94171 request.go:629] Waited for 169.340457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:33.108480   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:33.108485   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:33.108493   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:33.108498   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:33.115314   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:17:33.121462   94171 system_pods.go:59] 24 kube-system pods found
	I0420 00:17:33.121487   94171 system_pods.go:61] "coredns-7db6d8ff4d-9hc82" [279d40d8-eb21-476c-ba36-bc7592777126] Running
	I0420 00:17:33.121491   94171 system_pods.go:61] "coredns-7db6d8ff4d-jvvpr" [104d5328-1f6a-4747-8e26-9a98e38dc1cc] Running
	I0420 00:17:33.121494   94171 system_pods.go:61] "etcd-ha-371738" [5e23c4a0-7c15-47b9-b722-82e61a10f286] Running
	I0420 00:17:33.121498   94171 system_pods.go:61] "etcd-ha-371738-m02" [712e8a6e-7007-4cf1-8a0c-4e33eeccebcd] Running
	I0420 00:17:33.121500   94171 system_pods.go:61] "etcd-ha-371738-m03" [089407cc-414e-479f-8522-f19327068d36] Running
	I0420 00:17:33.121505   94171 system_pods.go:61] "kindnet-ggw7f" [2e0d1c1a-6fb4-4c3e-ae2b-41cfccaba2dd] Running
	I0420 00:17:33.121508   94171 system_pods.go:61] "kindnet-ph4sb" [d0786a22-e08e-4924-93b1-d8f3f34c9da7] Running
	I0420 00:17:33.121510   94171 system_pods.go:61] "kindnet-s87k2" [0820561f-f794-4ac5-8ce2-ae0cb4310c3e] Running
	I0420 00:17:33.121514   94171 system_pods.go:61] "kube-apiserver-ha-371738" [301ce02b-37b1-42ba-8a45-fbde327e2a02] Running
	I0420 00:17:33.121517   94171 system_pods.go:61] "kube-apiserver-ha-371738-m02" [a22f017a-e7b0-4748-9486-b52d35284584] Running
	I0420 00:17:33.121520   94171 system_pods.go:61] "kube-apiserver-ha-371738-m03" [5a627f3c-199a-4a3f-9940-2e7e1d73321d] Running
	I0420 00:17:33.121524   94171 system_pods.go:61] "kube-controller-manager-ha-371738" [bc03ed79-b024-46b1-af13-45a3def8bcae] Running
	I0420 00:17:33.121527   94171 system_pods.go:61] "kube-controller-manager-ha-371738-m02" [7b460bfb-bddf-46c0-a30c-f5e9757a32ad] Running
	I0420 00:17:33.121531   94171 system_pods.go:61] "kube-controller-manager-ha-371738-m03" [2f7bc375-ad5a-4ff1-93b4-3166d4b92c35] Running
	I0420 00:17:33.121535   94171 system_pods.go:61] "kube-proxy-59wls" [722c6b7d-109b-4201-a5f1-c02a65befcf2] Running
	I0420 00:17:33.121538   94171 system_pods.go:61] "kube-proxy-924z9" [87034485-00d8-4a57-949d-2e894dd08ce4] Running
	I0420 00:17:33.121541   94171 system_pods.go:61] "kube-proxy-zw62l" [dad72bfc-65c2-4007-9d5c-682ddf48c44d] Running
	I0420 00:17:33.121544   94171 system_pods.go:61] "kube-scheduler-ha-371738" [a3df56d3-c437-4ea9-b73d-2b22e93334b3] Running
	I0420 00:17:33.121547   94171 system_pods.go:61] "kube-scheduler-ha-371738-m02" [47dba6e4-cb4d-43e8-a173-06d13b08fd55] Running
	I0420 00:17:33.121553   94171 system_pods.go:61] "kube-scheduler-ha-371738-m03" [35e43bbb-1e3f-44cf-846b-3b1bcd08a468] Running
	I0420 00:17:33.121558   94171 system_pods.go:61] "kube-vip-ha-371738" [8d162382-25bb-4393-8c45-a8487b571605] Running
	I0420 00:17:33.121564   94171 system_pods.go:61] "kube-vip-ha-371738-m02" [76331738-5bca-4724-939e-4c16a906e65b] Running
	I0420 00:17:33.121572   94171 system_pods.go:61] "kube-vip-ha-371738-m03" [c09364c4-d879-49fb-a719-e9c06301a4bc] Running
	I0420 00:17:33.121577   94171 system_pods.go:61] "storage-provisioner" [1d7b89d3-7cff-4258-8215-819971fa1b81] Running
	I0420 00:17:33.121585   94171 system_pods.go:74] duration metric: took 182.580911ms to wait for pod list to return data ...
	I0420 00:17:33.121595   94171 default_sa.go:34] waiting for default service account to be created ...
	I0420 00:17:33.308803   94171 request.go:629] Waited for 187.118905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0420 00:17:33.308874   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0420 00:17:33.308880   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:33.308888   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:33.308892   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:33.312432   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:33.312690   94171 default_sa.go:45] found service account: "default"
	I0420 00:17:33.312710   94171 default_sa.go:55] duration metric: took 191.106105ms for default service account to be created ...
	I0420 00:17:33.312717   94171 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 00:17:33.508470   94171 request.go:629] Waited for 195.677884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:33.508532   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:33.508537   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:33.508545   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:33.508555   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:33.518190   94171 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 00:17:33.524807   94171 system_pods.go:86] 24 kube-system pods found
	I0420 00:17:33.524833   94171 system_pods.go:89] "coredns-7db6d8ff4d-9hc82" [279d40d8-eb21-476c-ba36-bc7592777126] Running
	I0420 00:17:33.524840   94171 system_pods.go:89] "coredns-7db6d8ff4d-jvvpr" [104d5328-1f6a-4747-8e26-9a98e38dc1cc] Running
	I0420 00:17:33.524845   94171 system_pods.go:89] "etcd-ha-371738" [5e23c4a0-7c15-47b9-b722-82e61a10f286] Running
	I0420 00:17:33.524849   94171 system_pods.go:89] "etcd-ha-371738-m02" [712e8a6e-7007-4cf1-8a0c-4e33eeccebcd] Running
	I0420 00:17:33.524853   94171 system_pods.go:89] "etcd-ha-371738-m03" [089407cc-414e-479f-8522-f19327068d36] Running
	I0420 00:17:33.524856   94171 system_pods.go:89] "kindnet-ggw7f" [2e0d1c1a-6fb4-4c3e-ae2b-41cfccaba2dd] Running
	I0420 00:17:33.524861   94171 system_pods.go:89] "kindnet-ph4sb" [d0786a22-e08e-4924-93b1-d8f3f34c9da7] Running
	I0420 00:17:33.524864   94171 system_pods.go:89] "kindnet-s87k2" [0820561f-f794-4ac5-8ce2-ae0cb4310c3e] Running
	I0420 00:17:33.524869   94171 system_pods.go:89] "kube-apiserver-ha-371738" [301ce02b-37b1-42ba-8a45-fbde327e2a02] Running
	I0420 00:17:33.524873   94171 system_pods.go:89] "kube-apiserver-ha-371738-m02" [a22f017a-e7b0-4748-9486-b52d35284584] Running
	I0420 00:17:33.524878   94171 system_pods.go:89] "kube-apiserver-ha-371738-m03" [5a627f3c-199a-4a3f-9940-2e7e1d73321d] Running
	I0420 00:17:33.524885   94171 system_pods.go:89] "kube-controller-manager-ha-371738" [bc03ed79-b024-46b1-af13-45a3def8bcae] Running
	I0420 00:17:33.524890   94171 system_pods.go:89] "kube-controller-manager-ha-371738-m02" [7b460bfb-bddf-46c0-a30c-f5e9757a32ad] Running
	I0420 00:17:33.524896   94171 system_pods.go:89] "kube-controller-manager-ha-371738-m03" [2f7bc375-ad5a-4ff1-93b4-3166d4b92c35] Running
	I0420 00:17:33.524901   94171 system_pods.go:89] "kube-proxy-59wls" [722c6b7d-109b-4201-a5f1-c02a65befcf2] Running
	I0420 00:17:33.524910   94171 system_pods.go:89] "kube-proxy-924z9" [87034485-00d8-4a57-949d-2e894dd08ce4] Running
	I0420 00:17:33.524917   94171 system_pods.go:89] "kube-proxy-zw62l" [dad72bfc-65c2-4007-9d5c-682ddf48c44d] Running
	I0420 00:17:33.524921   94171 system_pods.go:89] "kube-scheduler-ha-371738" [a3df56d3-c437-4ea9-b73d-2b22e93334b3] Running
	I0420 00:17:33.524927   94171 system_pods.go:89] "kube-scheduler-ha-371738-m02" [47dba6e4-cb4d-43e8-a173-06d13b08fd55] Running
	I0420 00:17:33.524932   94171 system_pods.go:89] "kube-scheduler-ha-371738-m03" [35e43bbb-1e3f-44cf-846b-3b1bcd08a468] Running
	I0420 00:17:33.524938   94171 system_pods.go:89] "kube-vip-ha-371738" [8d162382-25bb-4393-8c45-a8487b571605] Running
	I0420 00:17:33.524942   94171 system_pods.go:89] "kube-vip-ha-371738-m02" [76331738-5bca-4724-939e-4c16a906e65b] Running
	I0420 00:17:33.524948   94171 system_pods.go:89] "kube-vip-ha-371738-m03" [c09364c4-d879-49fb-a719-e9c06301a4bc] Running
	I0420 00:17:33.524951   94171 system_pods.go:89] "storage-provisioner" [1d7b89d3-7cff-4258-8215-819971fa1b81] Running
	I0420 00:17:33.524961   94171 system_pods.go:126] duration metric: took 212.238163ms to wait for k8s-apps to be running ...
	I0420 00:17:33.524969   94171 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 00:17:33.525015   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:17:33.544738   94171 system_svc.go:56] duration metric: took 19.760068ms WaitForService to wait for kubelet
	I0420 00:17:33.544768   94171 kubeadm.go:576] duration metric: took 18.748916318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:17:33.544790   94171 node_conditions.go:102] verifying NodePressure condition ...
	I0420 00:17:33.708442   94171 request.go:629] Waited for 163.564735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0420 00:17:33.708536   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0420 00:17:33.708549   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:33.708559   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:33.708565   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:33.712292   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:33.713845   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:17:33.713869   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:17:33.713881   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:17:33.713884   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:17:33.713887   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:17:33.713891   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:17:33.713895   94171 node_conditions.go:105] duration metric: took 169.098844ms to run NodePressure ...
	I0420 00:17:33.713907   94171 start.go:240] waiting for startup goroutines ...
	I0420 00:17:33.713931   94171 start.go:254] writing updated cluster config ...
	I0420 00:17:33.714201   94171 ssh_runner.go:195] Run: rm -f paused
	I0420 00:17:33.766160   94171 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 00:17:33.768271   94171 out.go:177] * Done! kubectl is now configured to use "ha-371738" cluster and "default" namespace by default
	
	
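	The client log above ends with the healthz wait (api_server.go:253) succeeding against https://192.168.39.217:8443/healthz and kubectl being configured for the "ha-371738" cluster. For reference only, below is a minimal sketch of that kind of apiserver readiness probe; it is not minikube's actual implementation, and the URL, timeout, polling interval, and TLS handling are assumptions made purely for illustration.

	```go
	// Minimal sketch (not minikube's code) of polling an apiserver /healthz
	// endpoint until it returns 200 "ok" or a deadline passes, mirroring the
	// "waiting for apiserver healthz status" step reported in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// Assumption: a test-only probe may skip cert verification; a real client
		// would trust the cluster CA instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // placeholder polling interval
		}
		return fmt.Errorf("apiserver healthz not ready within %s", timeout)
	}

	func main() {
		// Endpoint taken from the log above; the 30s budget is illustrative.
		if err := waitForHealthz("https://192.168.39.217:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
	```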
	==> CRI-O <==
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.199750202Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572464199726804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21bc9620-e353-498a-8002-5011714e4a47 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.200464253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45c163c5-bada-4012-bf2b-296ef2f3d8cd name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.200540997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45c163c5-bada-4012-bf2b-296ef2f3d8cd name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.200900103Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572256398169441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112401733564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112310336552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f5c9dcace63e2dc86b034dc66a0a660764b45a0999a972dea4c7c8cd62d11e,PodSandboxId:01cb2806eed5909650fa3a5bbb88b004584ddd9d24eee13df6af3949638dac25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713572110734518387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b,PodSandboxId:e0cd9f38c95d64e8716ff5be77be15b480d34445a3a35c3e35d5cc2bb3e044a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135721
08941915488,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572108700737032,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245ebfdbdadb6145c9104ae5b268ed54335723a4402d44a9f283dca41c61dbf2,PodSandboxId:e3275bdf3889ebe8780f0f686229b8a81cda9dd7ac84f9a1b3e19cf39eab89b1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572088929801835,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78293c6a4434108e95d95ceaf01fb5d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572087052250691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb,PodSandboxId:955da581ce36468153b6418af5b2fbdf608b8744b4c56479853fdcd91e690225,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572087016669323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572086953744333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370,PodSandboxId:19081241153ea0333e203fc33b13da47c76fa5bce9ccea62ac30f45b1c588e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572086899239600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45c163c5-bada-4012-bf2b-296ef2f3d8cd name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.246346424Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d2b534b-0453-4532-a1c2-7992300d5bdc name=/runtime.v1.RuntimeService/Version
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.246456598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d2b534b-0453-4532-a1c2-7992300d5bdc name=/runtime.v1.RuntimeService/Version
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.249799173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f4c32e5-b415-44d8-93cf-269568cadc24 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.250460463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572464250432682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f4c32e5-b415-44d8-93cf-269568cadc24 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.257521476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97bed3fe-1aa2-495c-903d-de8f6a094639 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.257599873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97bed3fe-1aa2-495c-903d-de8f6a094639 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.257821111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572256398169441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112401733564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112310336552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f5c9dcace63e2dc86b034dc66a0a660764b45a0999a972dea4c7c8cd62d11e,PodSandboxId:01cb2806eed5909650fa3a5bbb88b004584ddd9d24eee13df6af3949638dac25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713572110734518387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b,PodSandboxId:e0cd9f38c95d64e8716ff5be77be15b480d34445a3a35c3e35d5cc2bb3e044a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135721
08941915488,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572108700737032,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245ebfdbdadb6145c9104ae5b268ed54335723a4402d44a9f283dca41c61dbf2,PodSandboxId:e3275bdf3889ebe8780f0f686229b8a81cda9dd7ac84f9a1b3e19cf39eab89b1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572088929801835,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78293c6a4434108e95d95ceaf01fb5d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572087052250691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb,PodSandboxId:955da581ce36468153b6418af5b2fbdf608b8744b4c56479853fdcd91e690225,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572087016669323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572086953744333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370,PodSandboxId:19081241153ea0333e203fc33b13da47c76fa5bce9ccea62ac30f45b1c588e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572086899239600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97bed3fe-1aa2-495c-903d-de8f6a094639 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.299567053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a1cc395-a14c-43db-b45e-02e8c5890815 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.299644400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a1cc395-a14c-43db-b45e-02e8c5890815 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.300655625Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e0c9e42-b864-4ce5-a29a-07cf6903ded2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.301044147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572464301025006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e0c9e42-b864-4ce5-a29a-07cf6903ded2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.301748796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e665c31-f5d7-4756-b447-01f2a62fdcb6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.301827165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e665c31-f5d7-4756-b447-01f2a62fdcb6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.302060595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572256398169441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112401733564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112310336552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f5c9dcace63e2dc86b034dc66a0a660764b45a0999a972dea4c7c8cd62d11e,PodSandboxId:01cb2806eed5909650fa3a5bbb88b004584ddd9d24eee13df6af3949638dac25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713572110734518387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b,PodSandboxId:e0cd9f38c95d64e8716ff5be77be15b480d34445a3a35c3e35d5cc2bb3e044a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135721
08941915488,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572108700737032,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245ebfdbdadb6145c9104ae5b268ed54335723a4402d44a9f283dca41c61dbf2,PodSandboxId:e3275bdf3889ebe8780f0f686229b8a81cda9dd7ac84f9a1b3e19cf39eab89b1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572088929801835,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78293c6a4434108e95d95ceaf01fb5d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572087052250691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb,PodSandboxId:955da581ce36468153b6418af5b2fbdf608b8744b4c56479853fdcd91e690225,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572087016669323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572086953744333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370,PodSandboxId:19081241153ea0333e203fc33b13da47c76fa5bce9ccea62ac30f45b1c588e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572086899239600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e665c31-f5d7-4756-b447-01f2a62fdcb6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.354775961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f961810-1652-417b-8dcf-a2b5794a5c26 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.354880575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f961810-1652-417b-8dcf-a2b5794a5c26 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.356035209Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=940fde80-a289-4785-a69d-c44df8414224 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.356548076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572464356525948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=940fde80-a289-4785-a69d-c44df8414224 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.357346914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8485b49-2ca6-4cbf-8d83-6670bb01e29d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.357426777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8485b49-2ca6-4cbf-8d83-6670bb01e29d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:21:04 ha-371738 crio[682]: time="2024-04-20 00:21:04.357684875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572256398169441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112401733564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112310336552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f5c9dcace63e2dc86b034dc66a0a660764b45a0999a972dea4c7c8cd62d11e,PodSandboxId:01cb2806eed5909650fa3a5bbb88b004584ddd9d24eee13df6af3949638dac25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713572110734518387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b,PodSandboxId:e0cd9f38c95d64e8716ff5be77be15b480d34445a3a35c3e35d5cc2bb3e044a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135721
08941915488,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572108700737032,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245ebfdbdadb6145c9104ae5b268ed54335723a4402d44a9f283dca41c61dbf2,PodSandboxId:e3275bdf3889ebe8780f0f686229b8a81cda9dd7ac84f9a1b3e19cf39eab89b1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572088929801835,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78293c6a4434108e95d95ceaf01fb5d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572087052250691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb,PodSandboxId:955da581ce36468153b6418af5b2fbdf608b8744b4c56479853fdcd91e690225,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572087016669323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572086953744333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370,PodSandboxId:19081241153ea0333e203fc33b13da47c76fa5bce9ccea62ac30f45b1c588e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572086899239600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8485b49-2ca6-4cbf-8d83-6670bb01e29d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ee362bb57c39b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2952502d79ed7       busybox-fc5497c4f-f8cxz
	0895fff8b18b0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   96b6f46faf798       coredns-7db6d8ff4d-9hc82
	a8223d8428849       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   6951735c94141       coredns-7db6d8ff4d-jvvpr
	c0f5c9dcace63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   01cb2806eed59       storage-provisioner
	b13bd67903cc5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Running             kindnet-cni               0                   e0cd9f38c95d6       kindnet-s87k2
	484faebf3e657       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                0                   78d8eb3f68b71       kube-proxy-zw62l
	245ebfdbdadb6       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   e3275bdf3889e       kube-vip-ha-371738
	c7bfd34cee24c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   6b52fa6b93c1b       etcd-ha-371738
	f163f250149af       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      6 minutes ago       Running             kube-controller-manager   0                   955da581ce364       kube-controller-manager-ha-371738
	c9112b9048168       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      6 minutes ago       Running             kube-scheduler            0                   6c0d855406f87       kube-scheduler-ha-371738
	c0cd3108e73ec       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      6 minutes ago       Running             kube-apiserver            0                   19081241153ea       kube-apiserver-ha-371738
	
	
	==> coredns [0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf] <==
	[INFO] 10.244.2.2:50506 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205402s
	[INFO] 10.244.2.2:56719 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233711s
	[INFO] 10.244.2.2:56750 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00021898s
	[INFO] 10.244.2.2:53438 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111524s
	[INFO] 10.244.0.4:40741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104546s
	[INFO] 10.244.0.4:60826 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142804s
	[INFO] 10.244.1.2:55654 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142714s
	[INFO] 10.244.1.2:34889 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117318s
	[INFO] 10.244.1.2:45674 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142204s
	[INFO] 10.244.1.2:43577 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088578s
	[INFO] 10.244.1.2:36740 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123852s
	[INFO] 10.244.1.2:57454 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168492s
	[INFO] 10.244.2.2:49398 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205465s
	[INFO] 10.244.2.2:48930 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221231s
	[INFO] 10.244.2.2:42052 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108139s
	[INFO] 10.244.0.4:40360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213257s
	[INFO] 10.244.0.4:54447 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081534s
	[INFO] 10.244.1.2:40715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185061s
	[INFO] 10.244.1.2:45537 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165941s
	[INFO] 10.244.1.2:38158 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132179s
	[INFO] 10.244.2.2:42970 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000371127s
	[INFO] 10.244.2.2:50230 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172364s
	[INFO] 10.244.0.4:51459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058901s
	[INFO] 10.244.0.4:59988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131476s
	[INFO] 10.244.1.2:56359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140553s
	
	
	==> coredns [a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c] <==
	[INFO] 10.244.0.4:51638 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000400782s
	[INFO] 10.244.0.4:50604 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000084188s
	[INFO] 10.244.0.4:36574 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002067722s
	[INFO] 10.244.1.2:39782 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118148s
	[INFO] 10.244.2.2:34556 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00272395s
	[INFO] 10.244.2.2:59691 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134121s
	[INFO] 10.244.0.4:54126 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001623235s
	[INFO] 10.244.0.4:42647 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000295581s
	[INFO] 10.244.0.4:47843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001168377s
	[INFO] 10.244.0.4:59380 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132829s
	[INFO] 10.244.0.4:59464 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032892s
	[INFO] 10.244.0.4:52319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063642s
	[INFO] 10.244.1.2:41188 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001744808s
	[INFO] 10.244.1.2:56595 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001214481s
	[INFO] 10.244.2.2:57639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180873s
	[INFO] 10.244.0.4:57748 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177324s
	[INFO] 10.244.0.4:49496 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076032s
	[INFO] 10.244.1.2:36655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131976s
	[INFO] 10.244.2.2:37462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221492s
	[INFO] 10.244.2.2:58605 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186595s
	[INFO] 10.244.0.4:34556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191452s
	[INFO] 10.244.0.4:53073 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000345299s
	[INFO] 10.244.1.2:38241 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000181093s
	[INFO] 10.244.1.2:59304 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166312s
	[INFO] 10.244.1.2:50151 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139637s
	
	
	==> describe nodes <==
	Name:               ha-371738
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T00_14_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:14:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:20:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:18:00 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:18:00 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:18:00 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:18:00 +0000   Sat, 20 Apr 2024 00:15:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-371738
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74609fff13e94a48ba74bd0fc50a4818
	  System UUID:                74609fff-13e9-4a48-ba74-bd0fc50a4818
	  Boot ID:                    2adb72ca-aae0-452d-9d86-779c19923b8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f8cxz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 coredns-7db6d8ff4d-9hc82             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m57s
	  kube-system                 coredns-7db6d8ff4d-jvvpr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m57s
	  kube-system                 etcd-ha-371738                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-s87k2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m57s
	  kube-system                 kube-apiserver-ha-371738             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-371738    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-zw62l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-scheduler-ha-371738             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-371738                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m55s  kube-proxy       
	  Normal  Starting                 6m9s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m8s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s   kubelet          Node ha-371738 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s   kubelet          Node ha-371738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s   kubelet          Node ha-371738 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m58s  node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal  NodeReady                5m54s  kubelet          Node ha-371738 status is now: NodeReady
	  Normal  RegisteredNode           4m47s  node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal  RegisteredNode           3m34s  node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	
	
	Name:               ha-371738-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_16_02_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:15:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:18:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 20 Apr 2024 00:18:02 +0000   Sat, 20 Apr 2024 00:19:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 20 Apr 2024 00:18:02 +0000   Sat, 20 Apr 2024 00:19:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 20 Apr 2024 00:18:02 +0000   Sat, 20 Apr 2024 00:19:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 20 Apr 2024 00:18:02 +0000   Sat, 20 Apr 2024 00:19:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-371738-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e23e7a13fe24abd8986bea706ca80e3
	  System UUID:                4e23e7a1-3fe2-4abd-8986-bea706ca80e3
	  Boot ID:                    68a6a936-bedb-4253-bc9f-1d7fe3f3747e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j7g5h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 etcd-ha-371738-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-ggw7f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system                 kube-apiserver-ha-371738-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-ha-371738-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-59wls                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-ha-371738-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-vip-ha-371738-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-371738-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-371738-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-371738-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           4m47s                node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           3m34s                node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  NodeNotReady             99s                  node-controller  Node ha-371738-m02 status is now: NodeNotReady
	
	
	Name:               ha-371738-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_17_14_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:17:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:20:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:17:41 +0000   Sat, 20 Apr 2024 00:17:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:17:41 +0000   Sat, 20 Apr 2024 00:17:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:17:41 +0000   Sat, 20 Apr 2024 00:17:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:17:41 +0000   Sat, 20 Apr 2024 00:17:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-371738-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0917a4381b82461ea5ea3ad6015706e2
	  System UUID:                0917a438-1b82-461e-a5ea-3ad6015706e2
	  Boot ID:                    1e10e32a-0de9-4140-bd97-ed1fd3351685
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bqndp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 etcd-ha-371738-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m52s
	  kube-system                 kindnet-ph4sb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-apiserver-ha-371738-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-controller-manager-ha-371738-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-proxy-924z9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ha-371738-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-vip-ha-371738-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node ha-371738-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node ha-371738-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node ha-371738-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	
	
	Name:               ha-371738-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_18_15_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:18:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:20:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:18:45 +0000   Sat, 20 Apr 2024 00:18:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:18:45 +0000   Sat, 20 Apr 2024 00:18:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:18:45 +0000   Sat, 20 Apr 2024 00:18:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:18:45 +0000   Sat, 20 Apr 2024 00:18:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-371738-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 236dccfc477f4e3db2ca80077dc2160d
	  System UUID:                236dccfc-477f-4e3d-b2ca-80077dc2160d
	  Boot ID:                    7be0dc95-b84d-4dfb-9a83-50a5c6778683
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zsn9n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m50s
	  kube-system                 kube-proxy-7fn2b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m50s (x4 over 2m51s)  kubelet          Node ha-371738-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x4 over 2m51s)  kubelet          Node ha-371738-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x4 over 2m51s)  kubelet          Node ha-371738-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal  RegisteredNode           2m46s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-371738-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr20 00:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052238] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043570] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.622139] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.586855] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.718707] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.470452] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056643] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066813] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.173842] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.129751] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.277871] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.788058] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.061136] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.194311] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +1.186377] kauditd_printk_skb: 57 callbacks suppressed
	[  +8.916528] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.094324] kauditd_printk_skb: 40 callbacks suppressed
	[Apr20 00:15] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.572775] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0] <==
	{"level":"warn","ts":"2024-04-20T00:21:04.575026Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"bced3148e0d07545","rtt":"1.257299ms","error":"dial tcp 192.168.39.48:2380: i/o timeout"}
	{"level":"warn","ts":"2024-04-20T00:21:04.575313Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"bced3148e0d07545","rtt":"11.549918ms","error":"dial tcp 192.168.39.48:2380: i/o timeout"}
	{"level":"warn","ts":"2024-04-20T00:21:04.643988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.661289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.668264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.685956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.695054Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.702992Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.707014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.710531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.719632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.726297Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.732721Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.736313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.740281Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.740524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.749313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.755654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.762169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.766178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.769836Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.775885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.794371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.805314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:21:04.839986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:21:04 up 6 min,  0 users,  load average: 0.50, 0.33, 0.16
	Linux ha-371738 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b] <==
	I0420 00:20:30.378958       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:20:40.388648       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:20:40.388700       1 main.go:227] handling current node
	I0420 00:20:40.388762       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:20:40.388772       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:20:40.388883       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0420 00:20:40.388920       1 main.go:250] Node ha-371738-m03 has CIDR [10.244.2.0/24] 
	I0420 00:20:40.388970       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:20:40.388975       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:20:50.396355       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:20:50.396400       1 main.go:227] handling current node
	I0420 00:20:50.396410       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:20:50.396416       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:20:50.396537       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0420 00:20:50.396568       1 main.go:250] Node ha-371738-m03 has CIDR [10.244.2.0/24] 
	I0420 00:20:50.396616       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:20:50.396622       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:21:00.406589       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:21:00.406666       1 main.go:227] handling current node
	I0420 00:21:00.406689       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:21:00.406696       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:21:00.406824       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0420 00:21:00.406862       1 main.go:250] Node ha-371738-m03 has CIDR [10.244.2.0/24] 
	I0420 00:21:00.406919       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:21:00.406927       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370] <==
	W0420 00:14:52.056875       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0420 00:14:52.058205       1 controller.go:615] quota admission added evaluator for: endpoints
	I0420 00:14:52.062880       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0420 00:14:52.265684       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0420 00:14:56.065924       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 00:14:56.135042       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0420 00:14:56.153825       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 00:15:07.046369       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0420 00:15:07.292206       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0420 00:17:37.427043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41014: use of closed network connection
	E0420 00:17:37.656020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41028: use of closed network connection
	E0420 00:17:37.874786       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41058: use of closed network connection
	E0420 00:17:38.075685       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41076: use of closed network connection
	E0420 00:17:38.280325       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41102: use of closed network connection
	E0420 00:17:38.520683       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41134: use of closed network connection
	E0420 00:17:38.751463       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41144: use of closed network connection
	E0420 00:17:38.954683       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41162: use of closed network connection
	E0420 00:17:39.164406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41178: use of closed network connection
	E0420 00:17:39.482200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41210: use of closed network connection
	E0420 00:17:39.686439       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41232: use of closed network connection
	E0420 00:17:39.891351       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41248: use of closed network connection
	E0420 00:17:40.091882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41264: use of closed network connection
	E0420 00:17:40.304345       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41276: use of closed network connection
	E0420 00:17:40.511219       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41300: use of closed network connection
	W0420 00:18:52.069494       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.253]
	
	
	==> kube-controller-manager [f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb] <==
	I0420 00:17:10.499789       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-371738-m03\" does not exist"
	I0420 00:17:10.575576       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-371738-m03" podCIDRs=["10.244.2.0/24"]
	I0420 00:17:11.592960       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-371738-m03"
	I0420 00:17:34.756493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.442733ms"
	I0420 00:17:34.807658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.006278ms"
	I0420 00:17:34.808188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.589µs"
	I0420 00:17:34.915475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.681066ms"
	I0420 00:17:35.260709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="345.061388ms"
	E0420 00:17:35.260954       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0420 00:17:35.319051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.288409ms"
	I0420 00:17:35.319573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="302.179µs"
	I0420 00:17:36.014558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.107µs"
	I0420 00:17:36.744156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.643346ms"
	I0420 00:17:36.744278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.315µs"
	I0420 00:17:36.821605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.429714ms"
	I0420 00:17:36.821809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.1µs"
	I0420 00:17:36.881374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.949318ms"
	I0420 00:17:36.881513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.966µs"
	I0420 00:18:14.293494       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-371738-m04\" does not exist"
	I0420 00:18:14.339520       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-371738-m04" podCIDRs=["10.244.3.0/24"]
	I0420 00:18:16.634871       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-371738-m04"
	I0420 00:18:22.508085       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-371738-m04"
	I0420 00:19:25.089895       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-371738-m04"
	I0420 00:19:25.195440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.932878ms"
	I0420 00:19:25.195672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.218µs"
	
	
	==> kube-proxy [484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c] <==
	I0420 00:15:08.888287       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:15:08.912769       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	I0420 00:15:08.990690       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:15:08.990752       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:15:08.990775       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:15:08.995343       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:15:08.998186       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:15:08.998279       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:15:09.000576       1 config.go:192] "Starting service config controller"
	I0420 00:15:09.000623       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:15:09.000648       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:15:09.000652       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:15:09.001503       1 config.go:319] "Starting node config controller"
	I0420 00:15:09.001549       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:15:09.100821       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:15:09.100919       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:15:09.102329       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9] <==
	W0420 00:14:51.208337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:14:51.208493       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:14:51.222566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:14:51.222594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:14:51.232647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:14:51.232706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:14:51.328436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 00:14:51.328519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 00:14:51.361732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 00:14:51.361814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 00:14:51.485854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:14:51.485939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:14:51.589699       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 00:14:51.589796       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:14:51.602928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:14:51.603046       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0420 00:14:54.721944       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0420 00:17:10.621678       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ph4sb\": pod kindnet-ph4sb is already assigned to node \"ha-371738-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-ph4sb" node="ha-371738-m03"
	E0420 00:17:10.622142       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d0786a22-e08e-4924-93b1-d8f3f34c9da7(kube-system/kindnet-ph4sb) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ph4sb"
	E0420 00:17:10.622424       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ph4sb\": pod kindnet-ph4sb is already assigned to node \"ha-371738-m03\"" pod="kube-system/kindnet-ph4sb"
	I0420 00:17:10.622531       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ph4sb" node="ha-371738-m03"
	E0420 00:18:14.434674       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mkslx\": pod kube-proxy-mkslx is already assigned to node \"ha-371738-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mkslx" node="ha-371738-m04"
	E0420 00:18:14.437803       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e8ec373f-b6a5-4a0e-b0c2-51125d8da4f8(kube-system/kube-proxy-mkslx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-mkslx"
	E0420 00:18:14.437883       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mkslx\": pod kube-proxy-mkslx is already assigned to node \"ha-371738-m04\"" pod="kube-system/kube-proxy-mkslx"
	I0420 00:18:14.437935       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mkslx" node="ha-371738-m04"
	
	
	==> kubelet <==
	Apr 20 00:16:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:16:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:16:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:17:34 ha-371738 kubelet[1375]: I0420 00:17:34.744976    1375 topology_manager.go:215] "Topology Admit Handler" podUID="c53b85d0-fb09-4f4a-994b-650454a591e9" podNamespace="default" podName="busybox-fc5497c4f-f8cxz"
	Apr 20 00:17:34 ha-371738 kubelet[1375]: I0420 00:17:34.791210    1375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5p9n\" (UniqueName: \"kubernetes.io/projected/c53b85d0-fb09-4f4a-994b-650454a591e9-kube-api-access-j5p9n\") pod \"busybox-fc5497c4f-f8cxz\" (UID: \"c53b85d0-fb09-4f4a-994b-650454a591e9\") " pod="default/busybox-fc5497c4f-f8cxz"
	Apr 20 00:17:56 ha-371738 kubelet[1375]: E0420 00:17:56.017702    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:17:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:17:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:17:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:17:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:18:56 ha-371738 kubelet[1375]: E0420 00:18:56.021358    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:18:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:18:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:18:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:18:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:19:56 ha-371738 kubelet[1375]: E0420 00:19:56.017277    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:19:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:19:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:19:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:19:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:20:56 ha-371738 kubelet[1375]: E0420 00:20:56.016277    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:20:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:20:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:20:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:20:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-371738 -n ha-371738
helpers_test.go:261: (dbg) Run:  kubectl --context ha-371738 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.09s)
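Note: the post-mortem above was gathered automatically by the test helpers. A minimal manual re-check of the same cluster state, assuming the ha-371738 profile still exists on the Jenkins host and using only the commands already shown in the logs above, would be:

	out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-371738 logs
	kubectl --context ha-371738 describe node ha-371738-m04
	kubectl --context ha-371738 get po -A --field-selector=status.phase!=Running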

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (56.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 3 (3.183483748s)

                                                
                                                
-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-371738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:21:09.491645   99381 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:21:09.492073   99381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:09.492135   99381 out.go:304] Setting ErrFile to fd 2...
	I0420 00:21:09.492154   99381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:09.492631   99381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:21:09.492953   99381 out.go:298] Setting JSON to false
	I0420 00:21:09.493050   99381 mustload.go:65] Loading cluster: ha-371738
	I0420 00:21:09.493156   99381 notify.go:220] Checking for updates...
	I0420 00:21:09.493803   99381 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:21:09.493826   99381 status.go:255] checking status of ha-371738 ...
	I0420 00:21:09.494221   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:09.494274   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:09.514548   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34181
	I0420 00:21:09.514996   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:09.515571   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:09.515590   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:09.515923   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:09.516140   99381 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:21:09.518017   99381 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:21:09.518042   99381 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:09.518357   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:09.518406   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:09.532555   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0420 00:21:09.532969   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:09.533514   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:09.533545   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:09.533909   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:09.534092   99381 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:21:09.536917   99381 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:09.537358   99381 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:09.537389   99381 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:09.537557   99381 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:09.537926   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:09.537966   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:09.552217   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0420 00:21:09.552620   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:09.553094   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:09.553123   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:09.553511   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:09.553699   99381 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:21:09.553908   99381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:09.553944   99381 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:21:09.556493   99381 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:09.556967   99381 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:09.556992   99381 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:09.557145   99381 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:21:09.557326   99381 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:21:09.557470   99381 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:21:09.557626   99381 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:21:09.642656   99381 ssh_runner.go:195] Run: systemctl --version
	I0420 00:21:09.649205   99381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:09.666788   99381 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:09.666817   99381 api_server.go:166] Checking apiserver status ...
	I0420 00:21:09.666848   99381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:09.683140   99381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup
	W0420 00:21:09.695139   99381 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:09.695197   99381 ssh_runner.go:195] Run: ls
	I0420 00:21:09.700196   99381 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:09.704623   99381 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:09.704645   99381 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:21:09.704658   99381 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:09.704688   99381 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:21:09.704965   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:09.705012   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:09.721355   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I0420 00:21:09.721773   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:09.722241   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:09.722256   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:09.722571   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:09.722771   99381 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:21:09.724290   99381 status.go:330] ha-371738-m02 host status = "Running" (err=<nil>)
	I0420 00:21:09.724310   99381 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:09.724704   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:09.724770   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:09.738728   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0420 00:21:09.739126   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:09.739590   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:09.739611   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:09.739926   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:09.740223   99381 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:21:09.742904   99381 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:09.743364   99381 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:09.743391   99381 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:09.743553   99381 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:09.743907   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:09.743945   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:09.757650   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I0420 00:21:09.758057   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:09.758498   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:09.758519   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:09.758829   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:09.759001   99381 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:21:09.759184   99381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:09.759201   99381 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:21:09.761865   99381 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:09.762319   99381 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:09.762354   99381 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:09.762606   99381 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:21:09.762798   99381 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:21:09.762976   99381 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:21:09.763141   99381 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	W0420 00:21:12.265629   99381 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:12.265739   99381 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0420 00:21:12.265756   99381 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:12.265767   99381 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0420 00:21:12.265786   99381 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:12.265793   99381 status.go:255] checking status of ha-371738-m03 ...
	I0420 00:21:12.266094   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:12.266136   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:12.282642   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35125
	I0420 00:21:12.283172   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:12.283676   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:12.283700   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:12.284297   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:12.284538   99381 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:21:12.286214   99381 status.go:330] ha-371738-m03 host status = "Running" (err=<nil>)
	I0420 00:21:12.286234   99381 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:12.286542   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:12.286616   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:12.302384   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0420 00:21:12.302853   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:12.303312   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:12.303334   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:12.303649   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:12.303871   99381 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:21:12.307014   99381 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:12.307525   99381 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:12.307569   99381 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:12.307688   99381 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:12.307989   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:12.308025   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:12.323954   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35223
	I0420 00:21:12.324433   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:12.324863   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:12.324889   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:12.325234   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:12.325438   99381 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:21:12.325621   99381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:12.325645   99381 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:21:12.328762   99381 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:12.329204   99381 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:12.329240   99381 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:12.329373   99381 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:21:12.329528   99381 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:21:12.329717   99381 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:21:12.329986   99381 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:21:12.410686   99381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:12.426385   99381 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:12.426418   99381 api_server.go:166] Checking apiserver status ...
	I0420 00:21:12.426449   99381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:12.441526   99381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0420 00:21:12.451897   99381 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:12.451960   99381 ssh_runner.go:195] Run: ls
	I0420 00:21:12.456520   99381 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:12.464370   99381 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:12.464392   99381 status.go:422] ha-371738-m03 apiserver status = Running (err=<nil>)
	I0420 00:21:12.464405   99381 status.go:257] ha-371738-m03 status: &{Name:ha-371738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:12.464423   99381 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:21:12.464704   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:12.464750   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:12.481176   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0420 00:21:12.481686   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:12.482162   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:12.482189   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:12.482523   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:12.482692   99381 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:21:12.484257   99381 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:21:12.484271   99381 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:12.484550   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:12.484607   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:12.499815   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0420 00:21:12.500272   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:12.500752   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:12.500772   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:12.501113   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:12.501297   99381 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:21:12.503819   99381 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:12.504202   99381 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:12.504238   99381 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:12.504397   99381 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:12.504661   99381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:12.504699   99381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:12.518378   99381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0420 00:21:12.518773   99381 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:12.519170   99381 main.go:141] libmachine: Using API Version  1
	I0420 00:21:12.519202   99381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:12.519480   99381 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:12.519671   99381 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:21:12.519847   99381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:12.519875   99381 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:21:12.522378   99381 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:12.522782   99381 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:12.522811   99381 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:12.522946   99381 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:21:12.523084   99381 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:21:12.523230   99381 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:21:12.523380   99381 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:21:12.606208   99381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:12.621641   99381 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
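The status probes captured above follow the same sequence on every control-plane node: locate the kube-apiserver process with pgrep, attempt a freezer-cgroup lookup (which fails non-fatally in this log), then settle the verdict with an HTTPS request to /healthz on the load-balancer endpoint 192.168.39.254:8443. The following is a minimal, self-contained Go sketch of that sequence for illustration only; the function name, timeout and insecure TLS setting are assumptions for the example, not minikube's actual status code.

// Illustrative sketch only: mirrors the apiserver probe visible in the log
// above (pgrep for kube-apiserver, best-effort freezer-cgroup lookup, then
// an HTTPS /healthz request). Endpoint, timeout and InsecureSkipVerify are
// assumptions for the example, not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	// Find the kube-apiserver PID the same way the log does.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return false, fmt.Errorf("kube-apiserver process not found: %w", err)
	}
	pid := strings.TrimSpace(string(out))

	// The freezer-cgroup lookup can fail (as it does in the log) without
	// marking the node unhealthy; treat it as best-effort.
	if err := exec.Command("sudo", "egrep", "^[0-9]+:freezer:", "/proc/"+pid+"/cgroup").Run(); err != nil {
		fmt.Printf("freezer cgroup not found for pid %s (non-fatal): %v\n", pid, err)
	}

	// The final verdict comes from the apiserver /healthz endpoint.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443")
	fmt.Println(ok, err)
}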
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 3 (5.47494099s)

                                                
                                                
-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-371738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:21:13.349874   99481 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:21:13.350166   99481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:13.350179   99481 out.go:304] Setting ErrFile to fd 2...
	I0420 00:21:13.350185   99481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:13.350369   99481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:21:13.350537   99481 out.go:298] Setting JSON to false
	I0420 00:21:13.350561   99481 mustload.go:65] Loading cluster: ha-371738
	I0420 00:21:13.350615   99481 notify.go:220] Checking for updates...
	I0420 00:21:13.350984   99481 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:21:13.351003   99481 status.go:255] checking status of ha-371738 ...
	I0420 00:21:13.351369   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:13.351439   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:13.370573   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I0420 00:21:13.371025   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:13.371839   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:13.371894   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:13.372255   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:13.372476   99481 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:21:13.374265   99481 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:21:13.374282   99481 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:13.374636   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:13.374684   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:13.389118   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0420 00:21:13.389547   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:13.390002   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:13.390024   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:13.390325   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:13.390539   99481 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:21:13.393164   99481 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:13.393575   99481 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:13.393596   99481 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:13.393787   99481 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:13.394139   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:13.394187   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:13.408720   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33033
	I0420 00:21:13.409070   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:13.409536   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:13.409560   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:13.409822   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:13.410017   99481 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:21:13.410238   99481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:13.410258   99481 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:21:13.412768   99481 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:13.413236   99481 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:13.413274   99481 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:13.413454   99481 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:21:13.413593   99481 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:21:13.413743   99481 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:21:13.413840   99481 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:21:13.505937   99481 ssh_runner.go:195] Run: systemctl --version
	I0420 00:21:13.513150   99481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:13.530937   99481 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:13.530970   99481 api_server.go:166] Checking apiserver status ...
	I0420 00:21:13.531004   99481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:13.557819   99481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup
	W0420 00:21:13.571304   99481 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:13.571363   99481 ssh_runner.go:195] Run: ls
	I0420 00:21:13.577243   99481 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:13.582389   99481 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:13.582411   99481 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:21:13.582421   99481 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:13.582437   99481 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:21:13.582857   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:13.582894   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:13.598828   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0420 00:21:13.599231   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:13.599751   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:13.599771   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:13.600133   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:13.600344   99481 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:21:13.602038   99481 status.go:330] ha-371738-m02 host status = "Running" (err=<nil>)
	I0420 00:21:13.602055   99481 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:13.602319   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:13.602358   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:13.618113   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37671
	I0420 00:21:13.618511   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:13.619080   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:13.619104   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:13.619434   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:13.619634   99481 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:21:13.622647   99481 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:13.623129   99481 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:13.623155   99481 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:13.623367   99481 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:13.623691   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:13.623742   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:13.638737   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40491
	I0420 00:21:13.639360   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:13.639968   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:13.639988   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:13.641403   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:13.641827   99481 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:21:13.642064   99481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:13.642088   99481 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:21:13.644689   99481 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:13.645025   99481 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:13.645046   99481 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:13.645204   99481 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:21:13.645390   99481 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:21:13.645544   99481 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:21:13.645710   99481 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	W0420 00:21:15.337613   99481 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:15.337680   99481 retry.go:31] will retry after 151.297997ms: dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:18.409545   99481 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:18.409682   99481 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0420 00:21:18.409709   99481 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:18.409723   99481 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0420 00:21:18.409744   99481 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:18.409753   99481 status.go:255] checking status of ha-371738-m03 ...
	I0420 00:21:18.410175   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:18.410283   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:18.425741   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0420 00:21:18.426215   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:18.426747   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:18.426770   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:18.427149   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:18.427399   99481 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:21:18.429303   99481 status.go:330] ha-371738-m03 host status = "Running" (err=<nil>)
	I0420 00:21:18.429336   99481 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:18.429692   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:18.429765   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:18.444409   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41671
	I0420 00:21:18.444790   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:18.445268   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:18.445289   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:18.445636   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:18.445807   99481 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:21:18.448277   99481 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:18.448696   99481 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:18.448716   99481 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:18.448887   99481 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:18.449188   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:18.449222   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:18.463113   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I0420 00:21:18.463505   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:18.463948   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:18.463971   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:18.464289   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:18.464491   99481 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:21:18.464673   99481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:18.464692   99481 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:21:18.467038   99481 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:18.467449   99481 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:18.467483   99481 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:18.467634   99481 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:21:18.467805   99481 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:21:18.468008   99481 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:21:18.468178   99481 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:21:18.551146   99481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:18.570308   99481 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:18.570340   99481 api_server.go:166] Checking apiserver status ...
	I0420 00:21:18.570376   99481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:18.587680   99481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0420 00:21:18.599601   99481 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:18.599648   99481 ssh_runner.go:195] Run: ls
	I0420 00:21:18.605172   99481 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:18.609576   99481 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:18.609612   99481 status.go:422] ha-371738-m03 apiserver status = Running (err=<nil>)
	I0420 00:21:18.609631   99481 status.go:257] ha-371738-m03 status: &{Name:ha-371738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:18.609651   99481 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:21:18.609985   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:18.610023   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:18.625671   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0420 00:21:18.626067   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:18.626514   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:18.626542   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:18.626840   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:18.627042   99481 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:21:18.628479   99481 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:21:18.628503   99481 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:18.628820   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:18.628864   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:18.643975   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45959
	I0420 00:21:18.644343   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:18.644793   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:18.644815   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:18.645123   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:18.645396   99481 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:21:18.648225   99481 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:18.648694   99481 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:18.648724   99481 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:18.648883   99481 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:18.649187   99481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:18.649222   99481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:18.664246   99481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32939
	I0420 00:21:18.664611   99481 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:18.665098   99481 main.go:141] libmachine: Using API Version  1
	I0420 00:21:18.665119   99481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:18.665479   99481 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:18.665707   99481 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:21:18.665916   99481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:18.665941   99481 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:21:18.668406   99481 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:18.668823   99481 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:18.668862   99481 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:18.668964   99481 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:21:18.669146   99481 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:21:18.669278   99481 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:21:18.669423   99481 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:21:18.749560   99481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:18.767115   99481 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
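Each per-node check in the log also begins with a storage probe, the sh -c "df -h /var | awk 'NR==2{print $5}'" one-liner run over SSH; when the node is unreachable, that probe is what produces the "failed to get storage capacity of /var" error and the Host:Error result for ha-371738-m02. Below is a minimal sketch of the same probe, run locally instead of over SSH, purely as an illustration; the helper name and local execution are assumptions, not minikube code.

// Illustrative sketch only: the same df one-liner the status command runs on
// each node, executed locally here and parsed into a used-percentage.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func varUsagePercent() (int, error) {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		return 0, err
	}
	// df prints something like "23%"; strip whitespace and the percent sign.
	pct := strings.TrimSuffix(strings.TrimSpace(string(out)), "%")
	return strconv.Atoi(pct)
}

func main() {
	if pct, err := varUsagePercent(); err != nil {
		fmt.Println("failed to get storage capacity of /var:", err)
	} else {
		fmt.Printf("/var is %d%% used\n", pct)
	}
}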
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 3 (4.78830744s)

                                                
                                                
-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-371738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:21:20.304743   99598 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:21:20.305240   99598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:20.305295   99598 out.go:304] Setting ErrFile to fd 2...
	I0420 00:21:20.305338   99598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:20.305824   99598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:21:20.306375   99598 out.go:298] Setting JSON to false
	I0420 00:21:20.306421   99598 mustload.go:65] Loading cluster: ha-371738
	I0420 00:21:20.306525   99598 notify.go:220] Checking for updates...
	I0420 00:21:20.306800   99598 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:21:20.306815   99598 status.go:255] checking status of ha-371738 ...
	I0420 00:21:20.307223   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:20.307266   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:20.323432   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I0420 00:21:20.323877   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:20.324519   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:20.324547   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:20.325075   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:20.325286   99598 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:21:20.327004   99598 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:21:20.327035   99598 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:20.327352   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:20.327402   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:20.343174   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
	I0420 00:21:20.343593   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:20.344062   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:20.344082   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:20.344456   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:20.344636   99598 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:21:20.347758   99598 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:20.348201   99598 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:20.348222   99598 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:20.348378   99598 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:20.348700   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:20.348739   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:20.363785   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0420 00:21:20.364169   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:20.364585   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:20.364610   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:20.364896   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:20.365085   99598 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:21:20.365257   99598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:20.365301   99598 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:21:20.367911   99598 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:20.368343   99598 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:20.368368   99598 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:20.368513   99598 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:21:20.368658   99598 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:21:20.368766   99598 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:21:20.368919   99598 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:21:20.460458   99598 ssh_runner.go:195] Run: systemctl --version
	I0420 00:21:20.471994   99598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:20.490137   99598 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:20.490173   99598 api_server.go:166] Checking apiserver status ...
	I0420 00:21:20.490215   99598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:20.504533   99598 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup
	W0420 00:21:20.516153   99598 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:20.516197   99598 ssh_runner.go:195] Run: ls
	I0420 00:21:20.520865   99598 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:20.526941   99598 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:20.526962   99598 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:21:20.526972   99598 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:20.526987   99598 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:21:20.527266   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:20.527300   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:20.542856   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0420 00:21:20.543258   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:20.543763   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:20.543789   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:20.544171   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:20.544363   99598 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:21:20.545912   99598 status.go:330] ha-371738-m02 host status = "Running" (err=<nil>)
	I0420 00:21:20.545941   99598 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:20.546314   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:20.546357   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:20.560639   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0420 00:21:20.561016   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:20.561467   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:20.561492   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:20.561796   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:20.561974   99598 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:21:20.564914   99598 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:20.565330   99598 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:20.565356   99598 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:20.565470   99598 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:20.565850   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:20.565892   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:20.579636   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0420 00:21:20.580029   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:20.580516   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:20.580533   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:20.580818   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:20.581054   99598 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:21:20.581237   99598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:20.581261   99598 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:21:20.583871   99598 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:20.584640   99598 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:20.584671   99598 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:20.584792   99598 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:21:20.585001   99598 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:21:20.585169   99598 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:21:20.585296   99598 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	W0420 00:21:21.481552   99598 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:21.481626   99598 retry.go:31] will retry after 143.5388ms: dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:24.681537   99598 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:24.681621   99598 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0420 00:21:24.681634   99598 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:24.681641   99598 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0420 00:21:24.681673   99598 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:24.681681   99598 status.go:255] checking status of ha-371738-m03 ...
	I0420 00:21:24.681995   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:24.682055   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:24.698089   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I0420 00:21:24.698493   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:24.699051   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:24.699091   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:24.699425   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:24.699603   99598 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:21:24.701743   99598 status.go:330] ha-371738-m03 host status = "Running" (err=<nil>)
	I0420 00:21:24.701761   99598 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:24.702179   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:24.702224   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:24.717932   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42269
	I0420 00:21:24.718385   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:24.718806   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:24.718830   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:24.719215   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:24.719422   99598 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:21:24.722104   99598 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:24.722564   99598 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:24.722601   99598 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:24.722721   99598 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:24.723013   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:24.723063   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:24.738294   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0420 00:21:24.738666   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:24.739104   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:24.739129   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:24.739469   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:24.739678   99598 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:21:24.739887   99598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:24.739911   99598 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:21:24.742637   99598 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:24.743159   99598 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:24.743192   99598 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:24.743359   99598 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:21:24.743545   99598 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:21:24.743715   99598 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:21:24.743891   99598 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:21:24.829393   99598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:24.845235   99598 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:24.845264   99598 api_server.go:166] Checking apiserver status ...
	I0420 00:21:24.845336   99598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:24.860719   99598 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0420 00:21:24.871287   99598 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:24.871342   99598 ssh_runner.go:195] Run: ls
	I0420 00:21:24.876653   99598 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:24.883550   99598 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:24.883573   99598 status.go:422] ha-371738-m03 apiserver status = Running (err=<nil>)
	I0420 00:21:24.883582   99598 status.go:257] ha-371738-m03 status: &{Name:ha-371738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:24.883597   99598 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:21:24.883908   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:24.883944   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:24.898868   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I0420 00:21:24.899343   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:24.899825   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:24.899852   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:24.900188   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:24.900368   99598 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:21:24.901773   99598 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:21:24.901789   99598 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:24.902124   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:24.902178   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:24.918625   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44925
	I0420 00:21:24.919016   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:24.919401   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:24.919420   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:24.919731   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:24.919936   99598 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:21:24.922281   99598 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:24.922714   99598 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:24.922752   99598 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:24.922871   99598 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:24.923265   99598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:24.923312   99598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:24.937550   99598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I0420 00:21:24.937933   99598 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:24.938350   99598 main.go:141] libmachine: Using API Version  1
	I0420 00:21:24.938372   99598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:24.938701   99598 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:24.938862   99598 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:21:24.939007   99598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:24.939025   99598 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:21:24.941760   99598 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:24.942218   99598 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:24.942241   99598 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:24.942394   99598 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:21:24.942560   99598 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:21:24.942711   99598 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:21:24.942846   99598 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:21:25.021682   99598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:25.037890   99598 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
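The ha-371738-m02 failures in every run above reduce to the SSH dial retry shown in the stderr: sshutil logs "dial failure (will retry)", retries after a short backoff, and finally gives up with "connect: no route to host", which the status code reports as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent. The following is a small illustrative Go sketch of that dial-and-retry pattern; the attempt count and backoff are assumptions for the example, not the retry logic minikube actually uses.

// Illustrative sketch only: reproduces the dial-and-retry pattern seen in the
// log. A powered-off node typically fails here with "connect: no route to
// host", which is what surfaces as Host:Error for ha-371738-m02.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry tries to open a TCP connection to addr, retrying a fixed
// number of times with a constant backoff (assumed values for illustration).
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry after %s): %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	if conn, err := dialWithRetry("192.168.39.48:22", 3, 150*time.Millisecond); err != nil {
		fmt.Println("node unreachable:", err)
	} else {
		conn.Close()
	}
}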
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 3 (3.736915838s)

                                                
                                                
-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-371738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:21:27.618109   99698 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:21:27.618336   99698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:27.618345   99698 out.go:304] Setting ErrFile to fd 2...
	I0420 00:21:27.618350   99698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:27.618532   99698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:21:27.618701   99698 out.go:298] Setting JSON to false
	I0420 00:21:27.618725   99698 mustload.go:65] Loading cluster: ha-371738
	I0420 00:21:27.618782   99698 notify.go:220] Checking for updates...
	I0420 00:21:27.619115   99698 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:21:27.619132   99698 status.go:255] checking status of ha-371738 ...
	I0420 00:21:27.619561   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:27.619635   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:27.639323   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41527
	I0420 00:21:27.639758   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:27.640512   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:27.640560   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:27.640976   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:27.641203   99698 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:21:27.642683   99698 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:21:27.642701   99698 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:27.643050   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:27.643097   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:27.657991   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0420 00:21:27.658365   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:27.658809   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:27.658825   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:27.659121   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:27.659345   99698 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:21:27.662158   99698 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:27.662565   99698 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:27.662589   99698 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:27.662753   99698 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:27.663065   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:27.663118   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:27.677865   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0420 00:21:27.678208   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:27.678701   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:27.678724   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:27.679017   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:27.679198   99698 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:21:27.679394   99698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:27.679428   99698 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:21:27.682119   99698 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:27.682524   99698 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:27.682551   99698 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:27.682712   99698 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:21:27.682901   99698 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:21:27.683103   99698 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:21:27.683284   99698 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:21:27.774042   99698 ssh_runner.go:195] Run: systemctl --version
	I0420 00:21:27.781013   99698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:27.801876   99698 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:27.801906   99698 api_server.go:166] Checking apiserver status ...
	I0420 00:21:27.801940   99698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:27.820031   99698 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup
	W0420 00:21:27.831703   99698 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:27.831760   99698 ssh_runner.go:195] Run: ls
	I0420 00:21:27.836513   99698 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:27.841216   99698 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:27.841236   99698 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:21:27.841249   99698 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:27.841285   99698 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:21:27.841649   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:27.841689   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:27.856538   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46427
	I0420 00:21:27.856908   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:27.857366   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:27.857394   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:27.857767   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:27.857972   99698 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:21:27.859634   99698 status.go:330] ha-371738-m02 host status = "Running" (err=<nil>)
	I0420 00:21:27.859656   99698 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:27.860056   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:27.860132   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:27.874376   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38543
	I0420 00:21:27.874783   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:27.875214   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:27.875238   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:27.875606   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:27.875778   99698 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:21:27.878430   99698 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:27.878877   99698 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:27.878904   99698 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:27.879053   99698 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:27.879464   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:27.879505   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:27.894057   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0420 00:21:27.894428   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:27.894826   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:27.894842   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:27.895582   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:27.895903   99698 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:21:27.896141   99698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:27.896166   99698 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:21:27.899604   99698 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:27.900098   99698 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:27.900135   99698 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:27.900280   99698 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:21:27.900430   99698 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:21:27.900595   99698 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:21:27.900736   99698 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	W0420 00:21:30.953575   99698 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:30.953706   99698 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0420 00:21:30.953735   99698 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:30.953743   99698 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0420 00:21:30.953763   99698 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:30.953776   99698 status.go:255] checking status of ha-371738-m03 ...
	I0420 00:21:30.954111   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:30.954171   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:30.969588   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0420 00:21:30.970086   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:30.970586   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:30.970621   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:30.970951   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:30.971164   99698 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:21:30.972641   99698 status.go:330] ha-371738-m03 host status = "Running" (err=<nil>)
	I0420 00:21:30.972656   99698 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:30.972938   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:30.972986   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:30.987210   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0420 00:21:30.987645   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:30.988213   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:30.988240   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:30.988601   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:30.988806   99698 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:21:30.991911   99698 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:30.992349   99698 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:30.992373   99698 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:30.992517   99698 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:30.992787   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:30.992826   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:31.007174   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0420 00:21:31.007577   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:31.008094   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:31.008116   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:31.008460   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:31.008664   99698 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:21:31.008850   99698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:31.008868   99698 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:21:31.011635   99698 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:31.012053   99698 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:31.012084   99698 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:31.012219   99698 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:21:31.012374   99698 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:21:31.012511   99698 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:21:31.012670   99698 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:21:31.090222   99698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:31.105906   99698 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:31.105941   99698 api_server.go:166] Checking apiserver status ...
	I0420 00:21:31.105981   99698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:31.119953   99698 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0420 00:21:31.130173   99698 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:31.130220   99698 ssh_runner.go:195] Run: ls
	I0420 00:21:31.135856   99698 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:31.141184   99698 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:31.141209   99698 status.go:422] ha-371738-m03 apiserver status = Running (err=<nil>)
	I0420 00:21:31.141221   99698 status.go:257] ha-371738-m03 status: &{Name:ha-371738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:31.141245   99698 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:21:31.141680   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:31.141724   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:31.157091   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I0420 00:21:31.157606   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:31.158119   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:31.158136   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:31.158498   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:31.158695   99698 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:21:31.160123   99698 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:21:31.160148   99698 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:31.160409   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:31.160447   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:31.175584   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I0420 00:21:31.176024   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:31.176577   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:31.176602   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:31.176978   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:31.177164   99698 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:21:31.179728   99698 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:31.180217   99698 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:31.180238   99698 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:31.180416   99698 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:31.180808   99698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:31.180854   99698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:31.196421   99698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45977
	I0420 00:21:31.196775   99698 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:31.197188   99698 main.go:141] libmachine: Using API Version  1
	I0420 00:21:31.197211   99698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:31.197529   99698 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:31.197697   99698 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:21:31.197862   99698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:31.197883   99698 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:21:31.200841   99698 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:31.201293   99698 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:31.201325   99698 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:31.201489   99698 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:21:31.201662   99698 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:21:31.201830   99698 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:21:31.202021   99698 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:21:31.281598   99698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:31.297273   99698 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
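
Note on the stderr above: every pass fails the same way — the status check cannot open a TCP connection to ha-371738-m02's SSH endpoint (dial tcp 192.168.39.48:22: connect: no route to host), so that node is reported as Host:Error / Kubelet:Nonexistent while ha-371738, -m03 and -m04 stay healthy. The following is a minimal Go sketch of that reachability probe, using the address taken from the log; it is an illustration of the failure mode, not minikube's own code.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SSH endpoint of ha-371738-m02 as reported in the log above.
		addr := "192.168.39.48:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// A "connect: no route to host" error here is what the status
			// command surfaces as Host:Error / Kubelet:Nonexistent.
			fmt.Printf("node unreachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("node reachable on its SSH port")
	}
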
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 3 (4.151121181s)

                                                
                                                
-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-371738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:21:33.619034   99799 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:21:33.619155   99799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:33.619166   99799 out.go:304] Setting ErrFile to fd 2...
	I0420 00:21:33.619173   99799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:33.619340   99799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:21:33.619535   99799 out.go:298] Setting JSON to false
	I0420 00:21:33.619565   99799 mustload.go:65] Loading cluster: ha-371738
	I0420 00:21:33.619685   99799 notify.go:220] Checking for updates...
	I0420 00:21:33.620067   99799 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:21:33.620087   99799 status.go:255] checking status of ha-371738 ...
	I0420 00:21:33.620524   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:33.620579   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:33.640742   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37099
	I0420 00:21:33.641242   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:33.641896   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:33.641925   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:33.642333   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:33.642562   99799 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:21:33.644279   99799 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:21:33.644300   99799 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:33.644592   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:33.644626   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:33.659600   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35579
	I0420 00:21:33.659973   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:33.660419   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:33.660438   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:33.660770   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:33.660988   99799 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:21:33.663826   99799 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:33.664269   99799 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:33.664295   99799 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:33.664439   99799 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:33.664760   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:33.664810   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:33.679574   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42761
	I0420 00:21:33.679884   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:33.680324   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:33.680344   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:33.680673   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:33.680863   99799 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:21:33.681095   99799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:33.681126   99799 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:21:33.683982   99799 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:33.684373   99799 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:33.684412   99799 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:33.684559   99799 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:21:33.684733   99799 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:21:33.684888   99799 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:21:33.685026   99799 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:21:33.772168   99799 ssh_runner.go:195] Run: systemctl --version
	I0420 00:21:33.779113   99799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:33.796584   99799 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:33.796631   99799 api_server.go:166] Checking apiserver status ...
	I0420 00:21:33.796682   99799 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:33.811700   99799 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup
	W0420 00:21:33.824906   99799 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:33.824947   99799 ssh_runner.go:195] Run: ls
	I0420 00:21:33.830082   99799 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:33.834347   99799 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:33.834367   99799 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:21:33.834380   99799 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:33.834405   99799 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:21:33.834786   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:33.834844   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:33.849735   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39215
	I0420 00:21:33.850131   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:33.850612   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:33.850646   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:33.850931   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:33.851099   99799 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:21:33.852635   99799 status.go:330] ha-371738-m02 host status = "Running" (err=<nil>)
	I0420 00:21:33.852653   99799 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:33.852931   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:33.852978   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:33.868094   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0420 00:21:33.868544   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:33.869033   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:33.869058   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:33.869428   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:33.869629   99799 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:21:33.872176   99799 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:33.872549   99799 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:33.872579   99799 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:33.872721   99799 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:33.873100   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:33.873145   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:33.887446   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I0420 00:21:33.887813   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:33.888263   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:33.888283   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:33.888558   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:33.888731   99799 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:21:33.888919   99799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:33.888941   99799 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:21:33.891554   99799 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:33.891960   99799 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:33.891997   99799 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:33.892112   99799 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:21:33.892307   99799 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:21:33.892432   99799 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:21:33.892558   99799 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	W0420 00:21:34.025503   99799 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:34.025549   99799 retry.go:31] will retry after 270.744082ms: dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:37.353571   99799 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:37.353666   99799 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0420 00:21:37.353681   99799 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:37.353704   99799 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0420 00:21:37.353732   99799 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:37.353739   99799 status.go:255] checking status of ha-371738-m03 ...
	I0420 00:21:37.354018   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:37.354062   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:37.371545   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0420 00:21:37.372055   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:37.372642   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:37.372672   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:37.372978   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:37.373184   99799 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:21:37.374827   99799 status.go:330] ha-371738-m03 host status = "Running" (err=<nil>)
	I0420 00:21:37.374845   99799 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:37.375254   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:37.375303   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:37.389852   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33293
	I0420 00:21:37.390347   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:37.390806   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:37.390827   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:37.391230   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:37.391431   99799 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:21:37.394137   99799 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:37.394583   99799 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:37.394611   99799 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:37.394774   99799 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:37.395182   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:37.395224   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:37.409677   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0420 00:21:37.410076   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:37.410574   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:37.410597   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:37.410910   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:37.411111   99799 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:21:37.411319   99799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:37.411342   99799 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:21:37.414098   99799 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:37.414519   99799 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:37.414548   99799 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:37.414699   99799 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:21:37.414892   99799 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:21:37.415085   99799 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:21:37.415242   99799 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:21:37.500082   99799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:37.518236   99799 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:37.518276   99799 api_server.go:166] Checking apiserver status ...
	I0420 00:21:37.518319   99799 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:37.533247   99799 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0420 00:21:37.543670   99799 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:37.543726   99799 ssh_runner.go:195] Run: ls
	I0420 00:21:37.548950   99799 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:37.556608   99799 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:37.556641   99799 status.go:422] ha-371738-m03 apiserver status = Running (err=<nil>)
	I0420 00:21:37.556653   99799 status.go:257] ha-371738-m03 status: &{Name:ha-371738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:37.556672   99799 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:21:37.557121   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:37.557191   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:37.572911   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
	I0420 00:21:37.573345   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:37.573797   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:37.573819   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:37.574125   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:37.574323   99799 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:21:37.575867   99799 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:21:37.575883   99799 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:37.576147   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:37.576182   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:37.591157   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I0420 00:21:37.591542   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:37.592004   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:37.592030   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:37.592345   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:37.592582   99799 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:21:37.595452   99799 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:37.595937   99799 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:37.595973   99799 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:37.596131   99799 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:37.596493   99799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:37.596528   99799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:37.612232   99799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38363
	I0420 00:21:37.612696   99799 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:37.613198   99799 main.go:141] libmachine: Using API Version  1
	I0420 00:21:37.613215   99799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:37.613604   99799 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:37.613783   99799 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:21:37.613993   99799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:37.614010   99799 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:21:37.616975   99799 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:37.617444   99799 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:37.617468   99799 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:37.617627   99799 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:21:37.617786   99799 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:21:37.617914   99799 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:21:37.618041   99799 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:21:37.697754   99799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:37.712862   99799 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
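
For the reachable control-plane nodes the log shows the apiserver probe succeeding against the load-balanced endpoint ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok"). Below is a minimal sketch of such a probe, assuming the endpoint from the log and skipping certificate verification for brevity (a real client would trust the cluster CA); it is illustrative only, not the test's implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Load-balanced apiserver endpoint taken from the log above.
		url := "https://192.168.39.254:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skip TLS verification only for this sketch; use the cluster CA in practice.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz probe failed: %v\n", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	}
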
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 3 (3.753910759s)

                                                
                                                
-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-371738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:21:44.960549   99916 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:21:44.960799   99916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:44.960808   99916 out.go:304] Setting ErrFile to fd 2...
	I0420 00:21:44.960812   99916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:44.961001   99916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:21:44.961173   99916 out.go:298] Setting JSON to false
	I0420 00:21:44.961196   99916 mustload.go:65] Loading cluster: ha-371738
	I0420 00:21:44.961304   99916 notify.go:220] Checking for updates...
	I0420 00:21:44.961614   99916 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:21:44.961632   99916 status.go:255] checking status of ha-371738 ...
	I0420 00:21:44.962067   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:44.962143   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:44.977012   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I0420 00:21:44.977372   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:44.977886   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:44.977914   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:44.978215   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:44.978432   99916 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:21:44.979890   99916 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:21:44.979912   99916 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:44.980170   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:44.980203   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:44.995096   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0420 00:21:44.995541   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:44.996001   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:44.996039   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:44.996336   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:44.996515   99916 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:21:44.998913   99916 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:44.999267   99916 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:44.999292   99916 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:44.999449   99916 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:44.999729   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:44.999783   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:45.014614   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0420 00:21:45.015021   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:45.015556   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:45.015584   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:45.015908   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:45.016127   99916 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:21:45.016357   99916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:45.016381   99916 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:21:45.018882   99916 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:45.019256   99916 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:45.019290   99916 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:45.019378   99916 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:21:45.019564   99916 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:21:45.019711   99916 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:21:45.019851   99916 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:21:45.105863   99916 ssh_runner.go:195] Run: systemctl --version
	I0420 00:21:45.112745   99916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:45.133608   99916 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:45.133637   99916 api_server.go:166] Checking apiserver status ...
	I0420 00:21:45.133685   99916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:45.155690   99916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup
	W0420 00:21:45.169117   99916 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:45.169194   99916 ssh_runner.go:195] Run: ls
	I0420 00:21:45.174582   99916 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:45.179038   99916 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:45.179060   99916 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:21:45.179070   99916 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:45.179093   99916 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:21:45.179468   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:45.179507   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:45.194753   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40283
	I0420 00:21:45.195206   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:45.195720   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:45.195741   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:45.196044   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:45.196248   99916 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:21:45.197899   99916 status.go:330] ha-371738-m02 host status = "Running" (err=<nil>)
	I0420 00:21:45.197914   99916 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:45.198183   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:45.198215   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:45.212999   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38449
	I0420 00:21:45.213447   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:45.213919   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:45.213938   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:45.214253   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:45.214527   99916 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:21:45.217523   99916 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:45.217968   99916 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:45.217999   99916 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:45.218163   99916 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:21:45.218485   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:45.218516   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:45.232648   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38187
	I0420 00:21:45.233003   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:45.233427   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:45.233452   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:45.233712   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:45.233871   99916 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:21:45.234013   99916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:45.234030   99916 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:21:45.236602   99916 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:45.236994   99916 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:21:45.237019   99916 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:21:45.237384   99916 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:21:45.238691   99916 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:21:45.239001   99916 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:21:45.239243   99916 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	W0420 00:21:48.297558   99916 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	W0420 00:21:48.297709   99916 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0420 00:21:48.297737   99916 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:48.297749   99916 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0420 00:21:48.297775   99916 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0420 00:21:48.297787   99916 status.go:255] checking status of ha-371738-m03 ...
	I0420 00:21:48.298124   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:48.298184   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:48.313348   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I0420 00:21:48.313799   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:48.314255   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:48.314275   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:48.314577   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:48.314777   99916 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:21:48.316575   99916 status.go:330] ha-371738-m03 host status = "Running" (err=<nil>)
	I0420 00:21:48.316591   99916 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:48.316883   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:48.316926   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:48.331493   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0420 00:21:48.332137   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:48.332632   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:48.332660   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:48.333099   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:48.333350   99916 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:21:48.336041   99916 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:48.336565   99916 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:48.336599   99916 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:48.336754   99916 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:48.337161   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:48.337206   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:48.352576   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I0420 00:21:48.352997   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:48.353451   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:48.353482   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:48.353784   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:48.353954   99916 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:21:48.354149   99916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:48.354172   99916 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:21:48.356727   99916 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:48.357125   99916 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:48.357149   99916 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:48.357331   99916 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:21:48.357509   99916 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:21:48.357680   99916 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:21:48.357814   99916 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:21:48.441689   99916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:48.459081   99916 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:48.459115   99916 api_server.go:166] Checking apiserver status ...
	I0420 00:21:48.459146   99916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:48.474408   99916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0420 00:21:48.488497   99916 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:48.488548   99916 ssh_runner.go:195] Run: ls
	I0420 00:21:48.493648   99916 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:48.500001   99916 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:48.500021   99916 status.go:422] ha-371738-m03 apiserver status = Running (err=<nil>)
	I0420 00:21:48.500030   99916 status.go:257] ha-371738-m03 status: &{Name:ha-371738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:48.500043   99916 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:21:48.500371   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:48.500409   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:48.516949   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0420 00:21:48.517446   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:48.517950   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:48.517978   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:48.518320   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:48.518516   99916 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:21:48.520061   99916 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:21:48.520077   99916 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:48.520331   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:48.520364   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:48.535036   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I0420 00:21:48.535386   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:48.535841   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:48.535867   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:48.536168   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:48.536333   99916 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:21:48.538728   99916 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:48.539146   99916 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:48.539176   99916 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:48.539306   99916 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:48.539596   99916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:48.539630   99916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:48.554259   99916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40187
	I0420 00:21:48.554678   99916 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:48.555198   99916 main.go:141] libmachine: Using API Version  1
	I0420 00:21:48.555218   99916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:48.555496   99916 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:48.555682   99916 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:21:48.555919   99916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:48.555944   99916 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:21:48.558878   99916 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:48.559431   99916 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:48.559463   99916 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:48.559655   99916 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:21:48.559839   99916 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:21:48.560033   99916 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:21:48.560293   99916 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:21:48.641872   99916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:48.657287   99916 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 7 (652.209639ms)

-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-371738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0420 00:21:56.537399  100069 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:21:56.537502  100069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:56.537514  100069 out.go:304] Setting ErrFile to fd 2...
	I0420 00:21:56.537518  100069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:21:56.538056  100069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:21:56.538353  100069 out.go:298] Setting JSON to false
	I0420 00:21:56.538390  100069 mustload.go:65] Loading cluster: ha-371738
	I0420 00:21:56.538746  100069 notify.go:220] Checking for updates...
	I0420 00:21:56.539241  100069 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:21:56.539273  100069 status.go:255] checking status of ha-371738 ...
	I0420 00:21:56.539695  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:56.539751  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:56.555753  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43083
	I0420 00:21:56.556209  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:56.556864  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:56.556893  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:56.557520  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:56.557754  100069 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:21:56.559682  100069 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:21:56.559710  100069 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:56.559989  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:56.560030  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:56.574772  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0420 00:21:56.575132  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:56.575507  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:56.575529  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:56.575800  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:56.576001  100069 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:21:56.578324  100069 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:56.578709  100069 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:56.578740  100069 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:56.578880  100069 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:21:56.579183  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:56.579222  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:56.593218  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35077
	I0420 00:21:56.593638  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:56.594123  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:56.594152  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:56.594457  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:56.594618  100069 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:21:56.594802  100069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:56.594830  100069 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:21:56.597886  100069 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:56.598349  100069 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:21:56.598369  100069 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:21:56.598512  100069 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:21:56.598676  100069 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:21:56.598825  100069 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:21:56.598950  100069 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:21:56.702818  100069 ssh_runner.go:195] Run: systemctl --version
	I0420 00:21:56.710191  100069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:56.727125  100069 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:56.727159  100069 api_server.go:166] Checking apiserver status ...
	I0420 00:21:56.727190  100069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:56.744835  100069 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup
	W0420 00:21:56.757902  100069 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:56.757958  100069 ssh_runner.go:195] Run: ls
	I0420 00:21:56.763746  100069 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:56.768519  100069 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:56.768547  100069 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:21:56.768569  100069 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:56.768605  100069 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:21:56.768903  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:56.768949  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:56.783641  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0420 00:21:56.784056  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:56.784560  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:56.784585  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:56.784899  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:56.785161  100069 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:21:56.786753  100069 status.go:330] ha-371738-m02 host status = "Stopped" (err=<nil>)
	I0420 00:21:56.786770  100069 status.go:343] host is not running, skipping remaining checks
	I0420 00:21:56.786778  100069 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:56.786798  100069 status.go:255] checking status of ha-371738-m03 ...
	I0420 00:21:56.787141  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:56.787192  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:56.801853  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0420 00:21:56.802266  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:56.802761  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:56.802783  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:56.803189  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:56.803405  100069 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:21:56.805209  100069 status.go:330] ha-371738-m03 host status = "Running" (err=<nil>)
	I0420 00:21:56.805228  100069 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:56.805538  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:56.805573  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:56.820249  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0420 00:21:56.820631  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:56.821052  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:56.821073  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:56.821387  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:56.821595  100069 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:21:56.824279  100069 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:56.824784  100069 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:56.824823  100069 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:56.824939  100069 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:21:56.825299  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:56.825355  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:56.839696  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0420 00:21:56.840068  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:56.840512  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:56.840531  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:56.840860  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:56.841063  100069 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:21:56.841251  100069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:56.841273  100069 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:21:56.843992  100069 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:56.844382  100069 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:21:56.844410  100069 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:21:56.844537  100069 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:21:56.844713  100069 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:21:56.844936  100069 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:21:56.845092  100069 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:21:56.923405  100069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:56.942959  100069 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:21:56.942992  100069 api_server.go:166] Checking apiserver status ...
	I0420 00:21:56.943042  100069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:21:56.958489  100069 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0420 00:21:56.970302  100069 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:21:56.970351  100069 ssh_runner.go:195] Run: ls
	I0420 00:21:56.976091  100069 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:21:56.980605  100069 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:21:56.980634  100069 status.go:422] ha-371738-m03 apiserver status = Running (err=<nil>)
	I0420 00:21:56.980645  100069 status.go:257] ha-371738-m03 status: &{Name:ha-371738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:21:56.980664  100069 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:21:56.981046  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:56.981100  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:56.996137  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0420 00:21:56.996494  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:56.997053  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:56.997078  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:56.997439  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:56.997665  100069 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:21:56.999287  100069 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:21:56.999308  100069 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:56.999624  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:56.999661  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:57.013677  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I0420 00:21:57.014073  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:57.014483  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:57.014505  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:57.014816  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:57.015018  100069 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:21:57.017675  100069 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:57.018136  100069 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:57.018162  100069 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:57.018364  100069 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:21:57.018634  100069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:21:57.018695  100069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:21:57.032755  100069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0420 00:21:57.033145  100069 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:21:57.033649  100069 main.go:141] libmachine: Using API Version  1
	I0420 00:21:57.033669  100069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:21:57.033964  100069 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:21:57.034143  100069 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:21:57.034322  100069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:21:57.034343  100069 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:21:57.036937  100069 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:57.037413  100069 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:21:57.037442  100069 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:21:57.037529  100069 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:21:57.037707  100069 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:21:57.037854  100069 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:21:57.038027  100069 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:21:57.117507  100069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:21:57.132175  100069 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 7 (642.492644ms)

-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-371738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0420 00:22:02.988300  100156 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:22:02.988605  100156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:22:02.988616  100156 out.go:304] Setting ErrFile to fd 2...
	I0420 00:22:02.988620  100156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:22:02.988802  100156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:22:02.988969  100156 out.go:298] Setting JSON to false
	I0420 00:22:02.988999  100156 mustload.go:65] Loading cluster: ha-371738
	I0420 00:22:02.989051  100156 notify.go:220] Checking for updates...
	I0420 00:22:02.989386  100156 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:22:02.989403  100156 status.go:255] checking status of ha-371738 ...
	I0420 00:22:02.989826  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:02.989887  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.011416  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45151
	I0420 00:22:03.011966  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.012629  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.012651  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.012960  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.013165  100156 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:22:03.014889  100156 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:22:03.014910  100156 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:22:03.015315  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:03.015396  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.030214  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36891
	I0420 00:22:03.030714  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.031342  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.031374  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.031703  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.031900  100156 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:22:03.034727  100156 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:22:03.035082  100156 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:22:03.035103  100156 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:22:03.035256  100156 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:22:03.035525  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:03.035557  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.051581  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45135
	I0420 00:22:03.051977  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.052405  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.052428  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.052703  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.052910  100156 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:22:03.053122  100156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:22:03.053144  100156 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:22:03.055556  100156 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:22:03.055953  100156 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:22:03.055985  100156 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:22:03.056108  100156 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:22:03.056278  100156 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:22:03.056419  100156 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:22:03.056577  100156 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:22:03.142649  100156 ssh_runner.go:195] Run: systemctl --version
	I0420 00:22:03.150375  100156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:22:03.167552  100156 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:22:03.167584  100156 api_server.go:166] Checking apiserver status ...
	I0420 00:22:03.167651  100156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:22:03.183070  100156 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup
	W0420 00:22:03.194651  100156 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:22:03.194696  100156 ssh_runner.go:195] Run: ls
	I0420 00:22:03.199487  100156 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:22:03.205700  100156 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:22:03.205727  100156 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:22:03.205740  100156 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:22:03.205756  100156 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:22:03.206027  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:03.206063  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.220794  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I0420 00:22:03.221274  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.221823  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.221851  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.222191  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.222439  100156 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:22:03.224179  100156 status.go:330] ha-371738-m02 host status = "Stopped" (err=<nil>)
	I0420 00:22:03.224194  100156 status.go:343] host is not running, skipping remaining checks
	I0420 00:22:03.224201  100156 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:22:03.224223  100156 status.go:255] checking status of ha-371738-m03 ...
	I0420 00:22:03.224524  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:03.224571  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.239315  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43537
	I0420 00:22:03.239783  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.240231  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.240256  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.240589  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.240780  100156 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:22:03.242448  100156 status.go:330] ha-371738-m03 host status = "Running" (err=<nil>)
	I0420 00:22:03.242466  100156 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:22:03.242742  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:03.242774  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.257055  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0420 00:22:03.257503  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.257976  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.258000  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.258344  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.258553  100156 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:22:03.261376  100156 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:22:03.261860  100156 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:22:03.261883  100156 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:22:03.262004  100156 host.go:66] Checking if "ha-371738-m03" exists ...
	I0420 00:22:03.262334  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:03.262392  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.277331  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42657
	I0420 00:22:03.277796  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.278256  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.278282  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.278592  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.278769  100156 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:22:03.278992  100156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:22:03.279012  100156 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:22:03.281898  100156 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:22:03.282343  100156 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:22:03.282373  100156 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:22:03.282529  100156 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:22:03.282693  100156 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:22:03.282815  100156 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:22:03.282958  100156 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:22:03.361888  100156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:22:03.379390  100156 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:22:03.379422  100156 api_server.go:166] Checking apiserver status ...
	I0420 00:22:03.379473  100156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:22:03.394673  100156 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0420 00:22:03.405361  100156 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:22:03.405421  100156 ssh_runner.go:195] Run: ls
	I0420 00:22:03.410419  100156 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:22:03.415176  100156 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:22:03.415196  100156 status.go:422] ha-371738-m03 apiserver status = Running (err=<nil>)
	I0420 00:22:03.415205  100156 status.go:257] ha-371738-m03 status: &{Name:ha-371738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:22:03.415220  100156 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:22:03.415518  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:03.415551  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.430223  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44287
	I0420 00:22:03.430651  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.431236  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.431259  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.431655  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.431893  100156 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:22:03.433550  100156 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:22:03.433575  100156 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:22:03.433846  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:03.433880  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.448428  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I0420 00:22:03.448899  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.449408  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.449432  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.449807  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.449996  100156 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:22:03.452809  100156 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:22:03.453208  100156 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:22:03.453245  100156 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:22:03.453298  100156 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:22:03.453597  100156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:03.453635  100156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:03.468778  100156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I0420 00:22:03.469183  100156 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:03.469658  100156 main.go:141] libmachine: Using API Version  1
	I0420 00:22:03.469677  100156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:03.469995  100156 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:03.470186  100156 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:22:03.470345  100156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:22:03.470366  100156 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:22:03.473070  100156 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:22:03.473511  100156 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:22:03.473550  100156 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:22:03.473693  100156 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:22:03.473841  100156 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:22:03.473956  100156 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:22:03.474089  100156 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:22:03.554385  100156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:22:03.570114  100156 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-371738 -n ha-371738
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-371738 logs -n 25: (1.479529812s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738:/home/docker/cp-test_ha-371738-m03_ha-371738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738 sudo cat                                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m02:/home/docker/cp-test_ha-371738-m03_ha-371738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m02 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04:/home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m04 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp testdata/cp-test.txt                                                | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3122242891/001/cp-test_ha-371738-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738:/home/docker/cp-test_ha-371738-m04_ha-371738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738 sudo cat                                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m02:/home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m02 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03:/home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m03 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-371738 node stop m02 -v=7                                                     | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-371738 node start m02 -v=7                                                    | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:14:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:14:10.236871   94171 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:14:10.237002   94171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:14:10.237012   94171 out.go:304] Setting ErrFile to fd 2...
	I0420 00:14:10.237017   94171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:14:10.237224   94171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:14:10.237860   94171 out.go:298] Setting JSON to false
	I0420 00:14:10.238805   94171 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10597,"bootTime":1713561453,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 00:14:10.238866   94171 start.go:139] virtualization: kvm guest
	I0420 00:14:10.241171   94171 out.go:177] * [ha-371738] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 00:14:10.242724   94171 notify.go:220] Checking for updates...
	I0420 00:14:10.242772   94171 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:14:10.244171   94171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:14:10.245616   94171 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:14:10.246951   94171 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:14:10.248202   94171 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 00:14:10.249410   94171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:14:10.250695   94171 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:14:10.290457   94171 out.go:177] * Using the kvm2 driver based on user configuration
	I0420 00:14:10.291763   94171 start.go:297] selected driver: kvm2
	I0420 00:14:10.291777   94171 start.go:901] validating driver "kvm2" against <nil>
	I0420 00:14:10.291792   94171 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:14:10.292734   94171 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:14:10.292815   94171 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 00:14:10.307519   94171 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 00:14:10.307559   94171 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0420 00:14:10.307767   94171 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:14:10.307831   94171 cni.go:84] Creating CNI manager for ""
	I0420 00:14:10.307843   94171 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0420 00:14:10.307851   94171 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0420 00:14:10.307907   94171 start.go:340] cluster config:
	{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:14:10.308008   94171 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:14:10.309909   94171 out.go:177] * Starting "ha-371738" primary control-plane node in "ha-371738" cluster
	I0420 00:14:10.311299   94171 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:14:10.311327   94171 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 00:14:10.311335   94171 cache.go:56] Caching tarball of preloaded images
	I0420 00:14:10.311410   94171 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:14:10.311421   94171 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:14:10.311726   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:14:10.311748   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json: {Name:mkbaaf47d21f09ecf6d9895217ef92a775501247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:10.311881   94171 start.go:360] acquireMachinesLock for ha-371738: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:14:10.311907   94171 start.go:364] duration metric: took 14.266µs to acquireMachinesLock for "ha-371738"
	I0420 00:14:10.311921   94171 start.go:93] Provisioning new machine with config: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:14:10.311970   94171 start.go:125] createHost starting for "" (driver="kvm2")
	I0420 00:14:10.313590   94171 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0420 00:14:10.314213   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:14:10.314264   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:14:10.329153   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0420 00:14:10.329627   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:14:10.330168   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:14:10.330203   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:14:10.330554   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:14:10.330794   94171 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:14:10.330955   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:10.331131   94171 start.go:159] libmachine.API.Create for "ha-371738" (driver="kvm2")
	I0420 00:14:10.331163   94171 client.go:168] LocalClient.Create starting
	I0420 00:14:10.331198   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 00:14:10.331235   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:14:10.331255   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:14:10.331319   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 00:14:10.331344   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:14:10.331358   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:14:10.331384   94171 main.go:141] libmachine: Running pre-create checks...
	I0420 00:14:10.331396   94171 main.go:141] libmachine: (ha-371738) Calling .PreCreateCheck
	I0420 00:14:10.331708   94171 main.go:141] libmachine: (ha-371738) Calling .GetConfigRaw
	I0420 00:14:10.332106   94171 main.go:141] libmachine: Creating machine...
	I0420 00:14:10.332123   94171 main.go:141] libmachine: (ha-371738) Calling .Create
	I0420 00:14:10.332255   94171 main.go:141] libmachine: (ha-371738) Creating KVM machine...
	I0420 00:14:10.333707   94171 main.go:141] libmachine: (ha-371738) DBG | found existing default KVM network
	I0420 00:14:10.334370   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.334229   94195 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0420 00:14:10.334396   94171 main.go:141] libmachine: (ha-371738) DBG | created network xml: 
	I0420 00:14:10.334411   94171 main.go:141] libmachine: (ha-371738) DBG | <network>
	I0420 00:14:10.334426   94171 main.go:141] libmachine: (ha-371738) DBG |   <name>mk-ha-371738</name>
	I0420 00:14:10.334488   94171 main.go:141] libmachine: (ha-371738) DBG |   <dns enable='no'/>
	I0420 00:14:10.334517   94171 main.go:141] libmachine: (ha-371738) DBG |   
	I0420 00:14:10.334531   94171 main.go:141] libmachine: (ha-371738) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0420 00:14:10.334543   94171 main.go:141] libmachine: (ha-371738) DBG |     <dhcp>
	I0420 00:14:10.334555   94171 main.go:141] libmachine: (ha-371738) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0420 00:14:10.334572   94171 main.go:141] libmachine: (ha-371738) DBG |     </dhcp>
	I0420 00:14:10.334582   94171 main.go:141] libmachine: (ha-371738) DBG |   </ip>
	I0420 00:14:10.334593   94171 main.go:141] libmachine: (ha-371738) DBG |   
	I0420 00:14:10.334603   94171 main.go:141] libmachine: (ha-371738) DBG | </network>
	I0420 00:14:10.334613   94171 main.go:141] libmachine: (ha-371738) DBG | 
	I0420 00:14:10.339367   94171 main.go:141] libmachine: (ha-371738) DBG | trying to create private KVM network mk-ha-371738 192.168.39.0/24...
	I0420 00:14:10.401514   94171 main.go:141] libmachine: (ha-371738) DBG | private KVM network mk-ha-371738 192.168.39.0/24 created
	I0420 00:14:10.401566   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.401466   94195 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:14:10.401584   94171 main.go:141] libmachine: (ha-371738) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738 ...
	I0420 00:14:10.401612   94171 main.go:141] libmachine: (ha-371738) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 00:14:10.401633   94171 main.go:141] libmachine: (ha-371738) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 00:14:10.637507   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.637346   94195 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa...
	I0420 00:14:10.807033   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.806897   94195 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/ha-371738.rawdisk...
	I0420 00:14:10.807072   94171 main.go:141] libmachine: (ha-371738) DBG | Writing magic tar header
	I0420 00:14:10.807082   94171 main.go:141] libmachine: (ha-371738) DBG | Writing SSH key tar header
	I0420 00:14:10.807090   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:10.807040   94195 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738 ...
	I0420 00:14:10.807241   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738
	I0420 00:14:10.807277   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 00:14:10.807298   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738 (perms=drwx------)
	I0420 00:14:10.807332   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 00:14:10.807349   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 00:14:10.807361   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:14:10.807377   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 00:14:10.807387   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 00:14:10.807404   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 00:14:10.807419   94171 main.go:141] libmachine: (ha-371738) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 00:14:10.807432   94171 main.go:141] libmachine: (ha-371738) Creating domain...
	I0420 00:14:10.807446   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 00:14:10.807463   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home/jenkins
	I0420 00:14:10.807482   94171 main.go:141] libmachine: (ha-371738) DBG | Checking permissions on dir: /home
	I0420 00:14:10.807499   94171 main.go:141] libmachine: (ha-371738) DBG | Skipping /home - not owner
	I0420 00:14:10.808598   94171 main.go:141] libmachine: (ha-371738) define libvirt domain using xml: 
	I0420 00:14:10.808621   94171 main.go:141] libmachine: (ha-371738) <domain type='kvm'>
	I0420 00:14:10.808627   94171 main.go:141] libmachine: (ha-371738)   <name>ha-371738</name>
	I0420 00:14:10.808632   94171 main.go:141] libmachine: (ha-371738)   <memory unit='MiB'>2200</memory>
	I0420 00:14:10.808637   94171 main.go:141] libmachine: (ha-371738)   <vcpu>2</vcpu>
	I0420 00:14:10.808641   94171 main.go:141] libmachine: (ha-371738)   <features>
	I0420 00:14:10.808646   94171 main.go:141] libmachine: (ha-371738)     <acpi/>
	I0420 00:14:10.808650   94171 main.go:141] libmachine: (ha-371738)     <apic/>
	I0420 00:14:10.808656   94171 main.go:141] libmachine: (ha-371738)     <pae/>
	I0420 00:14:10.808665   94171 main.go:141] libmachine: (ha-371738)     
	I0420 00:14:10.808678   94171 main.go:141] libmachine: (ha-371738)   </features>
	I0420 00:14:10.808685   94171 main.go:141] libmachine: (ha-371738)   <cpu mode='host-passthrough'>
	I0420 00:14:10.808698   94171 main.go:141] libmachine: (ha-371738)   
	I0420 00:14:10.808701   94171 main.go:141] libmachine: (ha-371738)   </cpu>
	I0420 00:14:10.808706   94171 main.go:141] libmachine: (ha-371738)   <os>
	I0420 00:14:10.808711   94171 main.go:141] libmachine: (ha-371738)     <type>hvm</type>
	I0420 00:14:10.808718   94171 main.go:141] libmachine: (ha-371738)     <boot dev='cdrom'/>
	I0420 00:14:10.808723   94171 main.go:141] libmachine: (ha-371738)     <boot dev='hd'/>
	I0420 00:14:10.808730   94171 main.go:141] libmachine: (ha-371738)     <bootmenu enable='no'/>
	I0420 00:14:10.808734   94171 main.go:141] libmachine: (ha-371738)   </os>
	I0420 00:14:10.808739   94171 main.go:141] libmachine: (ha-371738)   <devices>
	I0420 00:14:10.808748   94171 main.go:141] libmachine: (ha-371738)     <disk type='file' device='cdrom'>
	I0420 00:14:10.808764   94171 main.go:141] libmachine: (ha-371738)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/boot2docker.iso'/>
	I0420 00:14:10.808776   94171 main.go:141] libmachine: (ha-371738)       <target dev='hdc' bus='scsi'/>
	I0420 00:14:10.808785   94171 main.go:141] libmachine: (ha-371738)       <readonly/>
	I0420 00:14:10.808789   94171 main.go:141] libmachine: (ha-371738)     </disk>
	I0420 00:14:10.808798   94171 main.go:141] libmachine: (ha-371738)     <disk type='file' device='disk'>
	I0420 00:14:10.808803   94171 main.go:141] libmachine: (ha-371738)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 00:14:10.808828   94171 main.go:141] libmachine: (ha-371738)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/ha-371738.rawdisk'/>
	I0420 00:14:10.808852   94171 main.go:141] libmachine: (ha-371738)       <target dev='hda' bus='virtio'/>
	I0420 00:14:10.808876   94171 main.go:141] libmachine: (ha-371738)     </disk>
	I0420 00:14:10.808900   94171 main.go:141] libmachine: (ha-371738)     <interface type='network'>
	I0420 00:14:10.808907   94171 main.go:141] libmachine: (ha-371738)       <source network='mk-ha-371738'/>
	I0420 00:14:10.808915   94171 main.go:141] libmachine: (ha-371738)       <model type='virtio'/>
	I0420 00:14:10.808920   94171 main.go:141] libmachine: (ha-371738)     </interface>
	I0420 00:14:10.808927   94171 main.go:141] libmachine: (ha-371738)     <interface type='network'>
	I0420 00:14:10.808933   94171 main.go:141] libmachine: (ha-371738)       <source network='default'/>
	I0420 00:14:10.808940   94171 main.go:141] libmachine: (ha-371738)       <model type='virtio'/>
	I0420 00:14:10.808946   94171 main.go:141] libmachine: (ha-371738)     </interface>
	I0420 00:14:10.808953   94171 main.go:141] libmachine: (ha-371738)     <serial type='pty'>
	I0420 00:14:10.808959   94171 main.go:141] libmachine: (ha-371738)       <target port='0'/>
	I0420 00:14:10.808963   94171 main.go:141] libmachine: (ha-371738)     </serial>
	I0420 00:14:10.808971   94171 main.go:141] libmachine: (ha-371738)     <console type='pty'>
	I0420 00:14:10.808979   94171 main.go:141] libmachine: (ha-371738)       <target type='serial' port='0'/>
	I0420 00:14:10.808992   94171 main.go:141] libmachine: (ha-371738)     </console>
	I0420 00:14:10.809001   94171 main.go:141] libmachine: (ha-371738)     <rng model='virtio'>
	I0420 00:14:10.809006   94171 main.go:141] libmachine: (ha-371738)       <backend model='random'>/dev/random</backend>
	I0420 00:14:10.809014   94171 main.go:141] libmachine: (ha-371738)     </rng>
	I0420 00:14:10.809019   94171 main.go:141] libmachine: (ha-371738)     
	I0420 00:14:10.809024   94171 main.go:141] libmachine: (ha-371738)     
	I0420 00:14:10.809028   94171 main.go:141] libmachine: (ha-371738)   </devices>
	I0420 00:14:10.809037   94171 main.go:141] libmachine: (ha-371738) </domain>
	I0420 00:14:10.809041   94171 main.go:141] libmachine: (ha-371738) 
	I0420 00:14:10.813367   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:e3:54:1d in network default
	I0420 00:14:10.813989   94171 main.go:141] libmachine: (ha-371738) Ensuring networks are active...
	I0420 00:14:10.814016   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:10.814692   94171 main.go:141] libmachine: (ha-371738) Ensuring network default is active
	I0420 00:14:10.814953   94171 main.go:141] libmachine: (ha-371738) Ensuring network mk-ha-371738 is active
	I0420 00:14:10.815492   94171 main.go:141] libmachine: (ha-371738) Getting domain xml...
	I0420 00:14:10.816196   94171 main.go:141] libmachine: (ha-371738) Creating domain...
	I0420 00:14:11.986727   94171 main.go:141] libmachine: (ha-371738) Waiting to get IP...
	I0420 00:14:11.987631   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:11.988035   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:11.988065   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:11.988011   94195 retry.go:31] will retry after 281.596305ms: waiting for machine to come up
	I0420 00:14:12.271521   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:12.272054   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:12.272079   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:12.271994   94195 retry.go:31] will retry after 266.421398ms: waiting for machine to come up
	I0420 00:14:12.540481   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:12.540910   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:12.540933   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:12.540860   94195 retry.go:31] will retry after 468.333676ms: waiting for machine to come up
	I0420 00:14:13.010520   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:13.010954   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:13.010989   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:13.010899   94195 retry.go:31] will retry after 425.140611ms: waiting for machine to come up
	I0420 00:14:13.437327   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:13.437703   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:13.437726   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:13.437659   94195 retry.go:31] will retry after 690.263967ms: waiting for machine to come up
	I0420 00:14:14.129691   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:14.130084   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:14.130130   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:14.130049   94195 retry.go:31] will retry after 866.995514ms: waiting for machine to come up
	I0420 00:14:14.999183   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:14.999601   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:14.999634   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:14.999566   94195 retry.go:31] will retry after 1.051690522s: waiting for machine to come up
	I0420 00:14:16.052424   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:16.052882   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:16.052910   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:16.052841   94195 retry.go:31] will retry after 1.246619998s: waiting for machine to come up
	I0420 00:14:17.301213   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:17.301633   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:17.301658   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:17.301590   94195 retry.go:31] will retry after 1.149702229s: waiting for machine to come up
	I0420 00:14:18.452804   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:18.453380   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:18.453408   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:18.453301   94195 retry.go:31] will retry after 1.414395436s: waiting for machine to come up
	I0420 00:14:19.868875   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:19.869253   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:19.869282   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:19.869199   94195 retry.go:31] will retry after 1.780293534s: waiting for machine to come up
	I0420 00:14:21.650997   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:21.651558   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:21.651581   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:21.651520   94195 retry.go:31] will retry after 2.372257741s: waiting for machine to come up
	I0420 00:14:24.026971   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:24.027509   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:24.027536   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:24.027461   94195 retry.go:31] will retry after 4.453964445s: waiting for machine to come up
	I0420 00:14:28.485579   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:28.485921   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find current IP address of domain ha-371738 in network mk-ha-371738
	I0420 00:14:28.485974   94171 main.go:141] libmachine: (ha-371738) DBG | I0420 00:14:28.485879   94195 retry.go:31] will retry after 5.412436051s: waiting for machine to come up
	I0420 00:14:33.902220   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:33.902551   94171 main.go:141] libmachine: (ha-371738) Found IP for machine: 192.168.39.217
	I0420 00:14:33.902598   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has current primary IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:33.902620   94171 main.go:141] libmachine: (ha-371738) Reserving static IP address...
	I0420 00:14:33.902900   94171 main.go:141] libmachine: (ha-371738) DBG | unable to find host DHCP lease matching {name: "ha-371738", mac: "52:54:00:a2:22:29", ip: "192.168.39.217"} in network mk-ha-371738
	I0420 00:14:33.973862   94171 main.go:141] libmachine: (ha-371738) DBG | Getting to WaitForSSH function...
	I0420 00:14:33.973890   94171 main.go:141] libmachine: (ha-371738) Reserved static IP address: 192.168.39.217
	I0420 00:14:33.973909   94171 main.go:141] libmachine: (ha-371738) Waiting for SSH to be available...
	I0420 00:14:33.976405   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:33.976724   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:33.976750   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:33.976865   94171 main.go:141] libmachine: (ha-371738) DBG | Using SSH client type: external
	I0420 00:14:33.976888   94171 main.go:141] libmachine: (ha-371738) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa (-rw-------)
	I0420 00:14:33.976923   94171 main.go:141] libmachine: (ha-371738) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 00:14:33.976950   94171 main.go:141] libmachine: (ha-371738) DBG | About to run SSH command:
	I0420 00:14:33.976999   94171 main.go:141] libmachine: (ha-371738) DBG | exit 0
	I0420 00:14:34.105632   94171 main.go:141] libmachine: (ha-371738) DBG | SSH cmd err, output: <nil>: 
	I0420 00:14:34.105941   94171 main.go:141] libmachine: (ha-371738) KVM machine creation complete!
	I0420 00:14:34.106307   94171 main.go:141] libmachine: (ha-371738) Calling .GetConfigRaw
	I0420 00:14:34.106895   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:34.107127   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:34.107347   94171 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 00:14:34.107364   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:14:34.108798   94171 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 00:14:34.108812   94171 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 00:14:34.108818   94171 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 00:14:34.108824   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.111034   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.111469   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.111496   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.111612   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.111777   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.111966   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.112133   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.112316   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.112578   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.112594   94171 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 00:14:34.224894   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:14:34.224930   94171 main.go:141] libmachine: Detecting the provisioner...
	I0420 00:14:34.224941   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.227796   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.228170   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.228224   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.228436   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.228645   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.228805   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.228940   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.229109   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.229290   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.229304   94171 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 00:14:34.342472   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 00:14:34.342556   94171 main.go:141] libmachine: found compatible host: buildroot
	I0420 00:14:34.342570   94171 main.go:141] libmachine: Provisioning with buildroot...
	I0420 00:14:34.342585   94171 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:14:34.342856   94171 buildroot.go:166] provisioning hostname "ha-371738"
	I0420 00:14:34.342889   94171 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:14:34.343087   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.345346   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.345652   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.345680   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.345763   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.345930   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.346080   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.346222   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.346409   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.346575   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.346587   94171 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-371738 && echo "ha-371738" | sudo tee /etc/hostname
	I0420 00:14:34.473096   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738
	
	I0420 00:14:34.473139   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.476156   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.476606   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.476637   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.476805   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.476969   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.477081   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.477208   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.477399   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.477589   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.477616   94171 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-371738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-371738/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-371738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:14:34.600268   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:14:34.600306   94171 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:14:34.600333   94171 buildroot.go:174] setting up certificates
	I0420 00:14:34.600375   94171 provision.go:84] configureAuth start
	I0420 00:14:34.600395   94171 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:14:34.600736   94171 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:14:34.603374   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.603748   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.603776   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.603967   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.606595   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.606977   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.607009   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.607144   94171 provision.go:143] copyHostCerts
	I0420 00:14:34.607180   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:14:34.607213   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:14:34.607223   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:14:34.607287   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:14:34.607365   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:14:34.607384   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:14:34.607388   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:14:34.607411   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:14:34.607452   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:14:34.607470   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:14:34.607477   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:14:34.607496   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:14:34.607542   94171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.ha-371738 san=[127.0.0.1 192.168.39.217 ha-371738 localhost minikube]
	I0420 00:14:34.685937   94171 provision.go:177] copyRemoteCerts
	I0420 00:14:34.685996   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:14:34.686023   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.688755   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.689087   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.689118   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.689290   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.689506   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.689669   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.689817   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:14:34.776784   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:14:34.776857   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0420 00:14:34.806052   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:14:34.806117   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 00:14:34.833608   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:14:34.833691   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:14:34.860865   94171 provision.go:87] duration metric: took 260.468299ms to configureAuth
	I0420 00:14:34.860896   94171 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:14:34.861106   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:14:34.861271   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:34.863727   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.864022   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:34.864074   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:34.864231   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:34.864433   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.864644   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:34.864784   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:34.864998   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:34.865242   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:34.865267   94171 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:14:35.166112   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:14:35.166143   94171 main.go:141] libmachine: Checking connection to Docker...
	I0420 00:14:35.166169   94171 main.go:141] libmachine: (ha-371738) Calling .GetURL
	I0420 00:14:35.167408   94171 main.go:141] libmachine: (ha-371738) DBG | Using libvirt version 6000000
	I0420 00:14:35.169613   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.169882   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.169904   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.170118   94171 main.go:141] libmachine: Docker is up and running!
	I0420 00:14:35.170132   94171 main.go:141] libmachine: Reticulating splines...
	I0420 00:14:35.170142   94171 client.go:171] duration metric: took 24.838966937s to LocalClient.Create
	I0420 00:14:35.170170   94171 start.go:167] duration metric: took 24.839039485s to libmachine.API.Create "ha-371738"
	I0420 00:14:35.170182   94171 start.go:293] postStartSetup for "ha-371738" (driver="kvm2")
	I0420 00:14:35.170197   94171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:14:35.170221   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.170482   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:14:35.170514   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:35.172733   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.173061   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.173092   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.173227   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:35.173443   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.173600   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:35.173782   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:14:35.260241   94171 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:14:35.265205   94171 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:14:35.265232   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:14:35.265305   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:14:35.265414   94171 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:14:35.265427   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:14:35.265548   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:14:35.276008   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:14:35.306759   94171 start.go:296] duration metric: took 136.561279ms for postStartSetup
	I0420 00:14:35.306816   94171 main.go:141] libmachine: (ha-371738) Calling .GetConfigRaw
	I0420 00:14:35.307395   94171 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:14:35.310155   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.310544   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.310574   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.310807   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:14:35.310998   94171 start.go:128] duration metric: took 24.999017816s to createHost
	I0420 00:14:35.311024   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:35.313335   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.313642   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.313666   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.313804   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:35.313980   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.314127   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.314270   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:35.314414   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:14:35.314587   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:14:35.314602   94171 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:14:35.426203   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713572075.396742702
	
	I0420 00:14:35.426227   94171 fix.go:216] guest clock: 1713572075.396742702
	I0420 00:14:35.426234   94171 fix.go:229] Guest: 2024-04-20 00:14:35.396742702 +0000 UTC Remote: 2024-04-20 00:14:35.311011787 +0000 UTC m=+25.122723442 (delta=85.730915ms)
	I0420 00:14:35.426268   94171 fix.go:200] guest clock delta is within tolerance: 85.730915ms
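	(Note: the guest-clock check above compares the VM clock against the host and only resyncs when the difference exceeds a tolerance. A minimal Go sketch of that comparison follows; the one-second tolerance is an illustrative assumption, not minikube's actual threshold.)

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest clock is close enough to the
	// host clock that no resync is needed.
	func withinTolerance(host, guest time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		// Timestamps taken from the "Guest:" / "Remote:" line above.
		host := time.Date(2024, 4, 20, 0, 14, 35, 311011787, time.UTC)
		guest := time.Date(2024, 4, 20, 0, 14, 35, 396742702, time.UTC)
		fmt.Println(withinTolerance(host, guest, time.Second)) // true: ~85.7ms delta
	}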
	I0420 00:14:35.426274   94171 start.go:83] releasing machines lock for "ha-371738", held for 25.114360814s
	I0420 00:14:35.426296   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.426621   94171 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:14:35.429284   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.429644   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.429670   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.429814   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.430474   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.430681   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:14:35.430745   94171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:14:35.430803   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:35.430890   94171 ssh_runner.go:195] Run: cat /version.json
	I0420 00:14:35.430907   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:14:35.433289   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.433570   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.433606   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.433748   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:35.433755   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.433928   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.434093   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:35.434153   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:35.434180   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:35.434258   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:14:35.434369   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:14:35.434535   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:14:35.434711   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:14:35.434899   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:14:35.539476   94171 ssh_runner.go:195] Run: systemctl --version
	I0420 00:14:35.546100   94171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:14:35.707988   94171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 00:14:35.715140   94171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:14:35.715242   94171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:14:35.736498   94171 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 00:14:35.736530   94171 start.go:494] detecting cgroup driver to use...
	I0420 00:14:35.736603   94171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:14:35.755787   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:14:35.772011   94171 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:14:35.772081   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:14:35.788311   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:14:35.803910   94171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:14:35.928875   94171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:14:36.077151   94171 docker.go:233] disabling docker service ...
	I0420 00:14:36.077228   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:14:36.093913   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:14:36.107308   94171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:14:36.245768   94171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:14:36.356723   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:14:36.371396   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:14:36.391651   94171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:14:36.391722   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.402637   94171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:14:36.402701   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.413751   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.424657   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.435450   94171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:14:36.446564   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.457469   94171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:14:36.476084   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
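	(Note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. As a rough equivalent, here is a small Go sketch, illustrative only and not minikube code, that performs the pause_image and cgroup_manager rewrites with regexps.)

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Same substitutions the log performs with sed -i.
		pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		out := pauseRe.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		out = cgroupRe.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}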
	I0420 00:14:36.486545   94171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:14:36.495981   94171 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 00:14:36.496036   94171 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 00:14:36.510160   94171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:14:36.519893   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:14:36.635739   94171 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 00:14:36.789171   94171 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:14:36.789252   94171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:14:36.796872   94171 start.go:562] Will wait 60s for crictl version
	I0420 00:14:36.796925   94171 ssh_runner.go:195] Run: which crictl
	I0420 00:14:36.801585   94171 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:14:36.837894   94171 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 00:14:36.837990   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:14:36.869135   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:14:36.904998   94171 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:14:36.906493   94171 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:14:36.909156   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:36.909578   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:14:36.909610   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:14:36.909813   94171 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:14:36.914426   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
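	(Note: the hosts-file update above drops any stale host.minikube.internal entry and appends the gateway IP. A hedged Go sketch of the same idea follows; it is illustrative and not the ssh_runner command itself.)

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any previous host.minikube.internal mapping.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, "192.168.39.1\thost.minikube.internal")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}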
	I0420 00:14:36.928301   94171 kubeadm.go:877] updating cluster {Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 00:14:36.928403   94171 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:14:36.928447   94171 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:14:36.967553   94171 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 00:14:36.967691   94171 ssh_runner.go:195] Run: which lz4
	I0420 00:14:36.972305   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0420 00:14:36.972384   94171 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 00:14:36.976978   94171 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 00:14:36.977009   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 00:14:38.655166   94171 crio.go:462] duration metric: took 1.682799034s to copy over tarball
	I0420 00:14:38.655238   94171 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 00:14:41.019902   94171 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.36463309s)
	I0420 00:14:41.019937   94171 crio.go:469] duration metric: took 2.364739736s to extract the tarball
	I0420 00:14:41.019945   94171 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 00:14:41.059584   94171 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:14:41.111191   94171 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:14:41.111222   94171 cache_images.go:84] Images are preloaded, skipping loading
	I0420 00:14:41.111232   94171 kubeadm.go:928] updating node { 192.168.39.217 8443 v1.30.0 crio true true} ...
	I0420 00:14:41.111369   94171 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-371738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 00:14:41.111435   94171 ssh_runner.go:195] Run: crio config
	I0420 00:14:41.165524   94171 cni.go:84] Creating CNI manager for ""
	I0420 00:14:41.165550   94171 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0420 00:14:41.165562   94171 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 00:14:41.165583   94171 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-371738 NodeName:ha-371738 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 00:14:41.165742   94171 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-371738"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 00:14:41.165767   94171 kube-vip.go:111] generating kube-vip config ...
	I0420 00:14:41.165808   94171 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0420 00:14:41.183420   94171 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 00:14:41.183562   94171 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
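	(Note: the static pod manifest above is what the kube-vip config generation renders; only the VIP address, port, image, and interface vary per cluster. Below is a minimal Go sketch of how such a manifest could be rendered from a template. The trimmed template text and parameter names are assumptions for illustration, not minikube's embedded template.)

	package main

	import (
		"os"
		"text/template"
	)

	// Trimmed, illustrative template covering only the fields that vary above.
	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: {{.Image}}
	    args: ["manager"]
	    env:
	    - name: vip_interface
	      value: {{.Interface}}
	    - name: port
	      value: "{{.Port}}"
	    - name: address
	      value: {{.VIP}}
	  hostNetwork: true
	`

	func main() {
		params := struct {
			Image, Interface, VIP string
			Port                  int
		}{
			Image:     "ghcr.io/kube-vip/kube-vip:v0.7.1",
			Interface: "eth0",
			VIP:       "192.168.39.254",
			Port:      8443,
		}
		tmpl := template.Must(template.New("kube-vip").Parse(manifest))
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}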
	I0420 00:14:41.183644   94171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:14:41.194986   94171 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 00:14:41.195057   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0420 00:14:41.206454   94171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0420 00:14:41.225330   94171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:14:41.244259   94171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0420 00:14:41.263430   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0420 00:14:41.283045   94171 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0420 00:14:41.287585   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:14:41.302261   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:14:41.426221   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:14:41.451386   94171 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738 for IP: 192.168.39.217
	I0420 00:14:41.451413   94171 certs.go:194] generating shared ca certs ...
	I0420 00:14:41.451436   94171 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.451588   94171 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:14:41.451630   94171 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:14:41.451648   94171 certs.go:256] generating profile certs ...
	I0420 00:14:41.451696   94171 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key
	I0420 00:14:41.451709   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt with IP's: []
	I0420 00:14:41.558257   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt ...
	I0420 00:14:41.558289   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt: {Name:mk37036e41ddddbb176e3a2220121f170aa3b61d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.558481   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key ...
	I0420 00:14:41.558496   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key: {Name:mk4a286ff198053f6c6692e73c8407a1abbd3471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.558603   94171 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.1a7612e1
	I0420 00:14:41.558621   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.1a7612e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.254]
	I0420 00:14:41.683925   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.1a7612e1 ...
	I0420 00:14:41.683954   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.1a7612e1: {Name:mkffd2bb9c98164ef687ab11af6ed48e5403c4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.684149   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.1a7612e1 ...
	I0420 00:14:41.684168   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.1a7612e1: {Name:mkf33c0579e0c29722688b1a37c41c0ea7e506dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.684263   94171 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.1a7612e1 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt
	I0420 00:14:41.684337   94171 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.1a7612e1 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key
	I0420 00:14:41.684396   94171 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key
	I0420 00:14:41.684413   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt with IP's: []
	I0420 00:14:41.727747   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt ...
	I0420 00:14:41.727776   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt: {Name:mk29033f64798e7acd5af0c56f6c48c6e244f1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.727972   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key ...
	I0420 00:14:41.727991   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key: {Name:mk42c86039cd6e2255e65e9ba5d6ceb201c5e13e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:14:41.728098   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:14:41.728119   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:14:41.728129   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:14:41.728150   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:14:41.728163   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:14:41.728176   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:14:41.728188   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:14:41.728197   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:14:41.728243   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:14:41.728280   94171 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:14:41.728296   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:14:41.728322   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:14:41.728347   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:14:41.728368   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:14:41.728404   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:14:41.728428   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:14:41.728441   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:14:41.728453   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:14:41.729061   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:14:41.765127   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:14:41.794438   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:14:41.822431   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:14:41.849785   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 00:14:41.877705   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:14:41.906946   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:14:41.936879   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 00:14:41.966624   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:14:41.997130   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:14:42.026360   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:14:42.056158   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 00:14:42.075714   94171 ssh_runner.go:195] Run: openssl version
	I0420 00:14:42.082311   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:14:42.094512   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:14:42.099778   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:14:42.099854   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:14:42.106407   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 00:14:42.118386   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:14:42.130132   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:14:42.134876   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:14:42.134933   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:14:42.141382   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 00:14:42.153942   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:14:42.166764   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:14:42.172358   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:14:42.172412   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:14:42.178712   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
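	(Note: each of the three blocks above installs a CA certificate and then creates the <subject-hash>.0 symlink that OpenSSL uses for CA lookups. A hedged Go sketch of that step follows, shelling out to openssl for the subject hash the same way the log does; it is illustrative, not minikube's certs code.)

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
	// mirroring the "openssl x509 -hash -noout" + "ln -fs" commands above.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}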
	I0420 00:14:42.190652   94171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:14:42.195594   94171 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 00:14:42.195653   94171 kubeadm.go:391] StartCluster: {Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:14:42.195745   94171 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:14:42.195787   94171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:14:42.248782   94171 cri.go:89] found id: ""
	I0420 00:14:42.248879   94171 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 00:14:42.261013   94171 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 00:14:42.276919   94171 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 00:14:42.291045   94171 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 00:14:42.291074   94171 kubeadm.go:156] found existing configuration files:
	
	I0420 00:14:42.291123   94171 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 00:14:42.308551   94171 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 00:14:42.308611   94171 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 00:14:42.322614   94171 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 00:14:42.334712   94171 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 00:14:42.334796   94171 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 00:14:42.347360   94171 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 00:14:42.359153   94171 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 00:14:42.359223   94171 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 00:14:42.370898   94171 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 00:14:42.382391   94171 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 00:14:42.382455   94171 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 00:14:42.394376   94171 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 00:14:42.644913   94171 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 00:14:56.659091   94171 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 00:14:56.659180   94171 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 00:14:56.659277   94171 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 00:14:56.659379   94171 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 00:14:56.659489   94171 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 00:14:56.659576   94171 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 00:14:56.661227   94171 out.go:204]   - Generating certificates and keys ...
	I0420 00:14:56.661334   94171 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 00:14:56.661412   94171 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 00:14:56.661475   94171 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 00:14:56.661524   94171 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 00:14:56.661574   94171 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 00:14:56.661644   94171 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 00:14:56.661724   94171 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 00:14:56.661852   94171 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-371738 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0420 00:14:56.661941   94171 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 00:14:56.662070   94171 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-371738 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0420 00:14:56.662199   94171 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 00:14:56.662299   94171 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 00:14:56.662361   94171 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 00:14:56.662436   94171 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 00:14:56.662501   94171 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 00:14:56.662579   94171 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 00:14:56.662654   94171 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 00:14:56.662717   94171 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 00:14:56.662798   94171 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 00:14:56.662902   94171 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 00:14:56.662986   94171 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 00:14:56.664457   94171 out.go:204]   - Booting up control plane ...
	I0420 00:14:56.664557   94171 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 00:14:56.664643   94171 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 00:14:56.664723   94171 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 00:14:56.664836   94171 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 00:14:56.664955   94171 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 00:14:56.665016   94171 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 00:14:56.665166   94171 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 00:14:56.665230   94171 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 00:14:56.665282   94171 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.15633ms
	I0420 00:14:56.665382   94171 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 00:14:56.665477   94171 kubeadm.go:309] [api-check] The API server is healthy after 9.040398858s
	I0420 00:14:56.665596   94171 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 00:14:56.665697   94171 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 00:14:56.665784   94171 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 00:14:56.665933   94171 kubeadm.go:309] [mark-control-plane] Marking the node ha-371738 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 00:14:56.666019   94171 kubeadm.go:309] [bootstrap-token] Using token: 7d4v3p.d8unl1jztmptssyo
	I0420 00:14:56.667261   94171 out.go:204]   - Configuring RBAC rules ...
	I0420 00:14:56.667354   94171 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 00:14:56.667463   94171 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 00:14:56.667585   94171 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 00:14:56.667698   94171 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 00:14:56.667833   94171 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 00:14:56.667952   94171 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 00:14:56.668086   94171 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 00:14:56.668144   94171 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 00:14:56.668212   94171 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 00:14:56.668222   94171 kubeadm.go:309] 
	I0420 00:14:56.668306   94171 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 00:14:56.668317   94171 kubeadm.go:309] 
	I0420 00:14:56.668378   94171 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 00:14:56.668384   94171 kubeadm.go:309] 
	I0420 00:14:56.668429   94171 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 00:14:56.668478   94171 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 00:14:56.668524   94171 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 00:14:56.668530   94171 kubeadm.go:309] 
	I0420 00:14:56.668611   94171 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 00:14:56.668621   94171 kubeadm.go:309] 
	I0420 00:14:56.668685   94171 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 00:14:56.668695   94171 kubeadm.go:309] 
	I0420 00:14:56.668765   94171 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 00:14:56.668865   94171 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 00:14:56.668972   94171 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 00:14:56.668984   94171 kubeadm.go:309] 
	I0420 00:14:56.669100   94171 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 00:14:56.669172   94171 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 00:14:56.669184   94171 kubeadm.go:309] 
	I0420 00:14:56.669295   94171 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7d4v3p.d8unl1jztmptssyo \
	I0420 00:14:56.669427   94171 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 00:14:56.669468   94171 kubeadm.go:309] 	--control-plane 
	I0420 00:14:56.669483   94171 kubeadm.go:309] 
	I0420 00:14:56.669591   94171 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 00:14:56.669600   94171 kubeadm.go:309] 
	I0420 00:14:56.669713   94171 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7d4v3p.d8unl1jztmptssyo \
	I0420 00:14:56.669817   94171 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
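Both join commands above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's public key. If it ever needs to be recomputed on the control plane (for example when joining a node later with a fresh token), the standard recipe from the kubeadm documentation is roughly:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

(Illustrative only; the token and hash in this run are exactly the ones kubeadm printed above.)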
	I0420 00:14:56.669838   94171 cni.go:84] Creating CNI manager for ""
	I0420 00:14:56.669846   94171 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0420 00:14:56.671350   94171 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0420 00:14:56.672615   94171 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0420 00:14:56.678620   94171 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0420 00:14:56.678638   94171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0420 00:14:56.697166   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0420 00:14:57.036496   94171 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 00:14:57.036606   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:57.036621   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-371738 minikube.k8s.io/updated_at=2024_04_20T00_14_57_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-371738 minikube.k8s.io/primary=true
	I0420 00:14:57.055480   94171 ops.go:34] apiserver oom_adj: -16
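The oom_adj value of -16 reported here comes from the /proc check run a few lines earlier; a strongly negative value tells the kernel's OOM killer to spare the apiserver process under memory pressure. The same check can be repeated by hand on the node:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16 in this run; lower values make an OOM kill less likely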
	I0420 00:14:57.278503   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:57.779520   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:58.279319   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:58.778705   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:59.278539   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:14:59.779557   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:00.279323   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:00.778642   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:01.279519   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:01.778739   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:02.278673   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:02.779134   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:03.279576   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:03.778637   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:04.279543   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:04.779335   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:05.279229   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:05.778563   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:06.279216   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:06.779166   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:15:06.891209   94171 kubeadm.go:1107] duration metric: took 9.854679932s to wait for elevateKubeSystemPrivileges
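The burst of repeated "get sa default" calls above is minikube polling, roughly twice a second, until the default ServiceAccount exists; its appearance signals that the controller-manager's service-account controller is running and kube-system privileges can safely be elevated. The equivalent probe by hand is simply the command from the log:

    sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig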
	W0420 00:15:06.891255   94171 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 00:15:06.891267   94171 kubeadm.go:393] duration metric: took 24.695616998s to StartCluster
	I0420 00:15:06.891290   94171 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:06.891378   94171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:15:06.892094   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:06.892339   94171 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:15:06.892370   94171 start.go:240] waiting for startup goroutines ...
	I0420 00:15:06.892352   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0420 00:15:06.892364   94171 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 00:15:06.892428   94171 addons.go:69] Setting storage-provisioner=true in profile "ha-371738"
	I0420 00:15:06.892465   94171 addons.go:69] Setting default-storageclass=true in profile "ha-371738"
	I0420 00:15:06.892478   94171 addons.go:234] Setting addon storage-provisioner=true in "ha-371738"
	I0420 00:15:06.892516   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:15:06.892523   94171 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-371738"
	I0420 00:15:06.892554   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:15:06.892891   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.892936   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.893002   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.893041   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.908100   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0420 00:15:06.908133   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0420 00:15:06.908589   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.908639   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.909103   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.909124   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.909104   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.909177   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.909501   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.909534   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.909690   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:15:06.910074   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.910107   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.911961   94171 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:15:06.912378   94171 kapi.go:59] client config for ha-371738: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt", KeyFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key", CAFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0420 00:15:06.913054   94171 cert_rotation.go:137] Starting client certificate rotation controller
	I0420 00:15:06.913382   94171 addons.go:234] Setting addon default-storageclass=true in "ha-371738"
	I0420 00:15:06.913430   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:15:06.913807   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.913844   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.925232   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36041
	I0420 00:15:06.925655   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.926207   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.926237   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.926561   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.926788   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:15:06.928162   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0420 00:15:06.928490   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:15:06.928631   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.930698   94171 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 00:15:06.929066   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.930741   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.931033   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.932320   94171 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:15:06.932337   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 00:15:06.932355   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:15:06.932918   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:06.932958   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:06.935247   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:06.935715   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:15:06.935744   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:06.935865   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:15:06.936139   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:15:06.936317   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:15:06.936488   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:15:06.948490   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33721
	I0420 00:15:06.948957   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:06.949472   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:06.949498   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:06.949825   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:06.950017   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:15:06.951437   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:15:06.951723   94171 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 00:15:06.951743   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 00:15:06.951761   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:15:06.955093   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:06.955543   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:15:06.955571   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:06.955726   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:15:06.955923   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:15:06.956054   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:15:06.956224   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:15:07.028723   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0420 00:15:07.146817   94171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:15:07.180824   94171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 00:15:07.523477   94171 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
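The long sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway (192.168.39.1 here) from inside the cluster. A quick way to confirm the injected record afterwards, assuming the kubeconfig context carries the profile name ha-371738 and not captured in this run, would be:

    kubectl --context ha-371738 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'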
	I0420 00:15:07.786608   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.786638   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.786658   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.786645   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.787116   94171 main.go:141] libmachine: (ha-371738) DBG | Closing plugin on server side
	I0420 00:15:07.787125   94171 main.go:141] libmachine: (ha-371738) DBG | Closing plugin on server side
	I0420 00:15:07.787122   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.787144   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.787160   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.787175   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.787147   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.787198   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.787279   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.787318   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.787420   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.787436   94171 main.go:141] libmachine: (ha-371738) DBG | Closing plugin on server side
	I0420 00:15:07.787447   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.787610   94171 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0420 00:15:07.787630   94171 round_trippers.go:469] Request Headers:
	I0420 00:15:07.787641   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:15:07.787656   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:15:07.787683   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.787696   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.805769   94171 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0420 00:15:07.806728   94171 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0420 00:15:07.806749   94171 round_trippers.go:469] Request Headers:
	I0420 00:15:07.806761   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:15:07.806769   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:15:07.806780   94171 round_trippers.go:473]     Content-Type: application/json
	I0420 00:15:07.813984   94171 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 00:15:07.814146   94171 main.go:141] libmachine: Making call to close driver server
	I0420 00:15:07.814162   94171 main.go:141] libmachine: (ha-371738) Calling .Close
	I0420 00:15:07.814465   94171 main.go:141] libmachine: Successfully made call to close driver server
	I0420 00:15:07.814486   94171 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 00:15:07.814488   94171 main.go:141] libmachine: (ha-371738) DBG | Closing plugin on server side
	I0420 00:15:07.816664   94171 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0420 00:15:07.817831   94171 addons.go:505] duration metric: took 925.463435ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0420 00:15:07.817867   94171 start.go:245] waiting for cluster config update ...
	I0420 00:15:07.817878   94171 start.go:254] writing updated cluster config ...
	I0420 00:15:07.819351   94171 out.go:177] 
	I0420 00:15:07.820931   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:15:07.820997   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:15:07.822681   94171 out.go:177] * Starting "ha-371738-m02" control-plane node in "ha-371738" cluster
	I0420 00:15:07.823905   94171 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:15:07.823927   94171 cache.go:56] Caching tarball of preloaded images
	I0420 00:15:07.824002   94171 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:15:07.824010   94171 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:15:07.824103   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:15:07.824306   94171 start.go:360] acquireMachinesLock for ha-371738-m02: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:15:07.824372   94171 start.go:364] duration metric: took 38.785µs to acquireMachinesLock for "ha-371738-m02"
	I0420 00:15:07.824396   94171 start.go:93] Provisioning new machine with config: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:15:07.824561   94171 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0420 00:15:07.826178   94171 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0420 00:15:07.826258   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:07.826281   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:07.840880   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0420 00:15:07.841345   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:07.841881   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:07.841906   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:07.842229   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:07.842434   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetMachineName
	I0420 00:15:07.842585   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:07.842757   94171 start.go:159] libmachine.API.Create for "ha-371738" (driver="kvm2")
	I0420 00:15:07.842781   94171 client.go:168] LocalClient.Create starting
	I0420 00:15:07.842815   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 00:15:07.842858   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:15:07.842879   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:15:07.842964   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 00:15:07.842992   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:15:07.843013   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:15:07.843038   94171 main.go:141] libmachine: Running pre-create checks...
	I0420 00:15:07.843048   94171 main.go:141] libmachine: (ha-371738-m02) Calling .PreCreateCheck
	I0420 00:15:07.843203   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetConfigRaw
	I0420 00:15:07.843717   94171 main.go:141] libmachine: Creating machine...
	I0420 00:15:07.843731   94171 main.go:141] libmachine: (ha-371738-m02) Calling .Create
	I0420 00:15:07.843870   94171 main.go:141] libmachine: (ha-371738-m02) Creating KVM machine...
	I0420 00:15:07.845062   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found existing default KVM network
	I0420 00:15:07.845209   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found existing private KVM network mk-ha-371738
	I0420 00:15:07.845364   94171 main.go:141] libmachine: (ha-371738-m02) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02 ...
	I0420 00:15:07.845386   94171 main.go:141] libmachine: (ha-371738-m02) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 00:15:07.845434   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:07.845334   94574 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:15:07.845521   94171 main.go:141] libmachine: (ha-371738-m02) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 00:15:08.075190   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:08.075057   94574 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa...
	I0420 00:15:08.268872   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:08.268746   94574 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/ha-371738-m02.rawdisk...
	I0420 00:15:08.268918   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Writing magic tar header
	I0420 00:15:08.268933   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Writing SSH key tar header
	I0420 00:15:08.268954   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:08.268860   94574 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02 ...
	I0420 00:15:08.268971   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02
	I0420 00:15:08.268996   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 00:15:08.269013   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02 (perms=drwx------)
	I0420 00:15:08.269035   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 00:15:08.269050   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:15:08.269062   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 00:15:08.269073   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 00:15:08.269079   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 00:15:08.269086   94171 main.go:141] libmachine: (ha-371738-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 00:15:08.269091   94171 main.go:141] libmachine: (ha-371738-m02) Creating domain...
	I0420 00:15:08.269103   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 00:15:08.269109   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 00:15:08.269141   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home/jenkins
	I0420 00:15:08.269163   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Checking permissions on dir: /home
	I0420 00:15:08.269208   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Skipping /home - not owner
	I0420 00:15:08.269996   94171 main.go:141] libmachine: (ha-371738-m02) define libvirt domain using xml: 
	I0420 00:15:08.270012   94171 main.go:141] libmachine: (ha-371738-m02) <domain type='kvm'>
	I0420 00:15:08.270018   94171 main.go:141] libmachine: (ha-371738-m02)   <name>ha-371738-m02</name>
	I0420 00:15:08.270023   94171 main.go:141] libmachine: (ha-371738-m02)   <memory unit='MiB'>2200</memory>
	I0420 00:15:08.270028   94171 main.go:141] libmachine: (ha-371738-m02)   <vcpu>2</vcpu>
	I0420 00:15:08.270033   94171 main.go:141] libmachine: (ha-371738-m02)   <features>
	I0420 00:15:08.270041   94171 main.go:141] libmachine: (ha-371738-m02)     <acpi/>
	I0420 00:15:08.270047   94171 main.go:141] libmachine: (ha-371738-m02)     <apic/>
	I0420 00:15:08.270060   94171 main.go:141] libmachine: (ha-371738-m02)     <pae/>
	I0420 00:15:08.270070   94171 main.go:141] libmachine: (ha-371738-m02)     
	I0420 00:15:08.270081   94171 main.go:141] libmachine: (ha-371738-m02)   </features>
	I0420 00:15:08.270087   94171 main.go:141] libmachine: (ha-371738-m02)   <cpu mode='host-passthrough'>
	I0420 00:15:08.270094   94171 main.go:141] libmachine: (ha-371738-m02)   
	I0420 00:15:08.270102   94171 main.go:141] libmachine: (ha-371738-m02)   </cpu>
	I0420 00:15:08.270110   94171 main.go:141] libmachine: (ha-371738-m02)   <os>
	I0420 00:15:08.270121   94171 main.go:141] libmachine: (ha-371738-m02)     <type>hvm</type>
	I0420 00:15:08.270132   94171 main.go:141] libmachine: (ha-371738-m02)     <boot dev='cdrom'/>
	I0420 00:15:08.270142   94171 main.go:141] libmachine: (ha-371738-m02)     <boot dev='hd'/>
	I0420 00:15:08.270154   94171 main.go:141] libmachine: (ha-371738-m02)     <bootmenu enable='no'/>
	I0420 00:15:08.270161   94171 main.go:141] libmachine: (ha-371738-m02)   </os>
	I0420 00:15:08.270172   94171 main.go:141] libmachine: (ha-371738-m02)   <devices>
	I0420 00:15:08.270187   94171 main.go:141] libmachine: (ha-371738-m02)     <disk type='file' device='cdrom'>
	I0420 00:15:08.270203   94171 main.go:141] libmachine: (ha-371738-m02)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/boot2docker.iso'/>
	I0420 00:15:08.270213   94171 main.go:141] libmachine: (ha-371738-m02)       <target dev='hdc' bus='scsi'/>
	I0420 00:15:08.270240   94171 main.go:141] libmachine: (ha-371738-m02)       <readonly/>
	I0420 00:15:08.270250   94171 main.go:141] libmachine: (ha-371738-m02)     </disk>
	I0420 00:15:08.270289   94171 main.go:141] libmachine: (ha-371738-m02)     <disk type='file' device='disk'>
	I0420 00:15:08.270321   94171 main.go:141] libmachine: (ha-371738-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 00:15:08.270352   94171 main.go:141] libmachine: (ha-371738-m02)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/ha-371738-m02.rawdisk'/>
	I0420 00:15:08.270447   94171 main.go:141] libmachine: (ha-371738-m02)       <target dev='hda' bus='virtio'/>
	I0420 00:15:08.270468   94171 main.go:141] libmachine: (ha-371738-m02)     </disk>
	I0420 00:15:08.270474   94171 main.go:141] libmachine: (ha-371738-m02)     <interface type='network'>
	I0420 00:15:08.270484   94171 main.go:141] libmachine: (ha-371738-m02)       <source network='mk-ha-371738'/>
	I0420 00:15:08.270492   94171 main.go:141] libmachine: (ha-371738-m02)       <model type='virtio'/>
	I0420 00:15:08.270498   94171 main.go:141] libmachine: (ha-371738-m02)     </interface>
	I0420 00:15:08.270505   94171 main.go:141] libmachine: (ha-371738-m02)     <interface type='network'>
	I0420 00:15:08.270523   94171 main.go:141] libmachine: (ha-371738-m02)       <source network='default'/>
	I0420 00:15:08.270538   94171 main.go:141] libmachine: (ha-371738-m02)       <model type='virtio'/>
	I0420 00:15:08.270551   94171 main.go:141] libmachine: (ha-371738-m02)     </interface>
	I0420 00:15:08.270566   94171 main.go:141] libmachine: (ha-371738-m02)     <serial type='pty'>
	I0420 00:15:08.270578   94171 main.go:141] libmachine: (ha-371738-m02)       <target port='0'/>
	I0420 00:15:08.270588   94171 main.go:141] libmachine: (ha-371738-m02)     </serial>
	I0420 00:15:08.270600   94171 main.go:141] libmachine: (ha-371738-m02)     <console type='pty'>
	I0420 00:15:08.270612   94171 main.go:141] libmachine: (ha-371738-m02)       <target type='serial' port='0'/>
	I0420 00:15:08.270623   94171 main.go:141] libmachine: (ha-371738-m02)     </console>
	I0420 00:15:08.270640   94171 main.go:141] libmachine: (ha-371738-m02)     <rng model='virtio'>
	I0420 00:15:08.270654   94171 main.go:141] libmachine: (ha-371738-m02)       <backend model='random'>/dev/random</backend>
	I0420 00:15:08.270663   94171 main.go:141] libmachine: (ha-371738-m02)     </rng>
	I0420 00:15:08.270671   94171 main.go:141] libmachine: (ha-371738-m02)     
	I0420 00:15:08.270680   94171 main.go:141] libmachine: (ha-371738-m02)     
	I0420 00:15:08.270692   94171 main.go:141] libmachine: (ha-371738-m02)   </devices>
	I0420 00:15:08.270702   94171 main.go:141] libmachine: (ha-371738-m02) </domain>
	I0420 00:15:08.270712   94171 main.go:141] libmachine: (ha-371738-m02) 
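The block above is the libvirt domain XML that libmachine defines against qemu:///system for the second control-plane VM. If a run like this needs to be debugged on the Jenkins host, the defined domain can be inspected directly, assuming virsh is installed there:

    virsh --connect qemu:///system dumpxml ha-371738-m02
    virsh --connect qemu:///system domifaddr ha-371738-m02   # corresponds to the "Waiting to get IP" loop below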
	I0420 00:15:08.277278   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:5e:5f:4d in network default
	I0420 00:15:08.277899   94171 main.go:141] libmachine: (ha-371738-m02) Ensuring networks are active...
	I0420 00:15:08.277922   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:08.278576   94171 main.go:141] libmachine: (ha-371738-m02) Ensuring network default is active
	I0420 00:15:08.278949   94171 main.go:141] libmachine: (ha-371738-m02) Ensuring network mk-ha-371738 is active
	I0420 00:15:08.279295   94171 main.go:141] libmachine: (ha-371738-m02) Getting domain xml...
	I0420 00:15:08.280066   94171 main.go:141] libmachine: (ha-371738-m02) Creating domain...
	I0420 00:15:09.479657   94171 main.go:141] libmachine: (ha-371738-m02) Waiting to get IP...
	I0420 00:15:09.480611   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:09.480992   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:09.481045   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:09.480994   94574 retry.go:31] will retry after 304.170036ms: waiting for machine to come up
	I0420 00:15:09.786359   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:09.786916   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:09.786948   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:09.786882   94574 retry.go:31] will retry after 243.704709ms: waiting for machine to come up
	I0420 00:15:10.032349   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:10.032828   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:10.032863   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:10.032767   94574 retry.go:31] will retry after 376.540423ms: waiting for machine to come up
	I0420 00:15:10.411306   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:10.411841   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:10.411865   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:10.411775   94574 retry.go:31] will retry after 487.578156ms: waiting for machine to come up
	I0420 00:15:10.901455   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:10.901951   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:10.901986   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:10.901924   94574 retry.go:31] will retry after 589.95165ms: waiting for machine to come up
	I0420 00:15:11.493802   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:11.494275   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:11.494343   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:11.494230   94574 retry.go:31] will retry after 645.321602ms: waiting for machine to come up
	I0420 00:15:12.140990   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:12.141406   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:12.141434   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:12.141358   94574 retry.go:31] will retry after 757.810418ms: waiting for machine to come up
	I0420 00:15:12.901051   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:12.901506   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:12.901534   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:12.901460   94574 retry.go:31] will retry after 1.170896015s: waiting for machine to come up
	I0420 00:15:14.073666   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:14.074068   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:14.074097   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:14.074007   94574 retry.go:31] will retry after 1.501764207s: waiting for machine to come up
	I0420 00:15:15.577571   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:15.577934   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:15.577990   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:15.577901   94574 retry.go:31] will retry after 2.27309831s: waiting for machine to come up
	I0420 00:15:17.852548   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:17.853040   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:17.853068   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:17.852996   94574 retry.go:31] will retry after 2.900030711s: waiting for machine to come up
	I0420 00:15:20.754252   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:20.754731   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:20.754766   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:20.754651   94574 retry.go:31] will retry after 2.698308641s: waiting for machine to come up
	I0420 00:15:23.454454   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:23.454855   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:23.454884   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:23.454804   94574 retry.go:31] will retry after 4.201613554s: waiting for machine to come up
	I0420 00:15:27.658762   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:27.659200   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find current IP address of domain ha-371738-m02 in network mk-ha-371738
	I0420 00:15:27.659228   94171 main.go:141] libmachine: (ha-371738-m02) DBG | I0420 00:15:27.659137   94574 retry.go:31] will retry after 4.466090921s: waiting for machine to come up
	I0420 00:15:32.127839   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.128261   94171 main.go:141] libmachine: (ha-371738-m02) Found IP for machine: 192.168.39.48
	I0420 00:15:32.128288   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has current primary IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.128298   94171 main.go:141] libmachine: (ha-371738-m02) Reserving static IP address...
	I0420 00:15:32.128650   94171 main.go:141] libmachine: (ha-371738-m02) DBG | unable to find host DHCP lease matching {name: "ha-371738-m02", mac: "52:54:00:3b:ab:c8", ip: "192.168.39.48"} in network mk-ha-371738
	I0420 00:15:32.199433   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Getting to WaitForSSH function...
	I0420 00:15:32.199464   94171 main.go:141] libmachine: (ha-371738-m02) Reserved static IP address: 192.168.39.48
	I0420 00:15:32.199477   94171 main.go:141] libmachine: (ha-371738-m02) Waiting for SSH to be available...
	I0420 00:15:32.202838   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.203265   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.203302   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.203426   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Using SSH client type: external
	I0420 00:15:32.203459   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa (-rw-------)
	I0420 00:15:32.203496   94171 main.go:141] libmachine: (ha-371738-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 00:15:32.203510   94171 main.go:141] libmachine: (ha-371738-m02) DBG | About to run SSH command:
	I0420 00:15:32.203523   94171 main.go:141] libmachine: (ha-371738-m02) DBG | exit 0
	I0420 00:15:32.325485   94171 main.go:141] libmachine: (ha-371738-m02) DBG | SSH cmd err, output: <nil>: 
	I0420 00:15:32.325786   94171 main.go:141] libmachine: (ha-371738-m02) KVM machine creation complete!
	I0420 00:15:32.326127   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetConfigRaw
	I0420 00:15:32.326719   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:32.326936   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:32.327114   94171 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 00:15:32.327130   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:15:32.328417   94171 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 00:15:32.328442   94171 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 00:15:32.328448   94171 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 00:15:32.328454   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.330848   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.331211   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.331252   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.331396   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:32.331597   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.331772   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.331912   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:32.332053   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:32.332323   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:32.332340   94171 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 00:15:32.432750   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:15:32.432772   94171 main.go:141] libmachine: Detecting the provisioner...
	I0420 00:15:32.432779   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.435496   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.435911   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.435946   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.436049   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:32.436262   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.436447   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.436646   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:32.436816   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:32.436988   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:32.437002   94171 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 00:15:32.542787   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 00:15:32.542882   94171 main.go:141] libmachine: found compatible host: buildroot
	I0420 00:15:32.542892   94171 main.go:141] libmachine: Provisioning with buildroot...
	I0420 00:15:32.542899   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetMachineName
	I0420 00:15:32.543184   94171 buildroot.go:166] provisioning hostname "ha-371738-m02"
	I0420 00:15:32.543208   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetMachineName
	I0420 00:15:32.543401   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.546001   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.546491   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.546521   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.546684   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:32.546908   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.547089   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.547265   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:32.547484   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:32.547679   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:32.547690   94171 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-371738-m02 && echo "ha-371738-m02" | sudo tee /etc/hostname
	I0420 00:15:32.665216   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738-m02
	
	I0420 00:15:32.665240   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.668086   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.668484   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.668515   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.668702   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:32.668898   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.669060   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:32.669195   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:32.669390   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:32.669633   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:32.669658   94171 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-371738-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-371738-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-371738-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:15:32.791413   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:15:32.791448   94171 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:15:32.791466   94171 buildroot.go:174] setting up certificates
	I0420 00:15:32.791477   94171 provision.go:84] configureAuth start
	I0420 00:15:32.791485   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetMachineName
	I0420 00:15:32.791823   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:15:32.794666   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.795068   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.795097   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.795249   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:32.797673   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.798026   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:32.798051   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:32.798196   94171 provision.go:143] copyHostCerts
	I0420 00:15:32.798220   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:15:32.798247   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:15:32.798256   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:15:32.798315   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:15:32.798420   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:15:32.798440   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:15:32.798447   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:15:32.798476   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:15:32.798524   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:15:32.798546   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:15:32.798552   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:15:32.798572   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:15:32.798613   94171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.ha-371738-m02 san=[127.0.0.1 192.168.39.48 ha-371738-m02 localhost minikube]
	I0420 00:15:33.245269   94171 provision.go:177] copyRemoteCerts
	I0420 00:15:33.245363   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:15:33.245388   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.248117   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.248513   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.248538   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.248681   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.248922   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.249107   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.249263   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:15:33.334547   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:15:33.334619   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:15:33.361714   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:15:33.361762   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0420 00:15:33.387454   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:15:33.387511   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 00:15:33.412605   94171 provision.go:87] duration metric: took 621.113895ms to configureAuth
	I0420 00:15:33.412636   94171 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:15:33.412855   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:15:33.412944   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.415597   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.415879   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.415906   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.415998   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.416216   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.416384   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.416484   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.416670   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:33.416848   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:33.416869   94171 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:15:33.688228   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:15:33.688256   94171 main.go:141] libmachine: Checking connection to Docker...
	I0420 00:15:33.688266   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetURL
	I0420 00:15:33.689677   94171 main.go:141] libmachine: (ha-371738-m02) DBG | Using libvirt version 6000000
	I0420 00:15:33.691545   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.691849   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.691877   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.692069   94171 main.go:141] libmachine: Docker is up and running!
	I0420 00:15:33.692087   94171 main.go:141] libmachine: Reticulating splines...
	I0420 00:15:33.692094   94171 client.go:171] duration metric: took 25.849305358s to LocalClient.Create
	I0420 00:15:33.692118   94171 start.go:167] duration metric: took 25.849361585s to libmachine.API.Create "ha-371738"
	I0420 00:15:33.692131   94171 start.go:293] postStartSetup for "ha-371738-m02" (driver="kvm2")
	I0420 00:15:33.692145   94171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:15:33.692176   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.692399   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:15:33.692425   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.694378   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.694680   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.694710   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.694845   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.695030   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.695195   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.695311   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:15:33.777206   94171 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:15:33.781523   94171 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:15:33.781547   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:15:33.781619   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:15:33.781717   94171 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:15:33.781730   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:15:33.781828   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:15:33.792378   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:15:33.818287   94171 start.go:296] duration metric: took 126.141852ms for postStartSetup
	I0420 00:15:33.818346   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetConfigRaw
	I0420 00:15:33.819026   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:15:33.821828   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.822285   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.822314   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.822576   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:15:33.822762   94171 start.go:128] duration metric: took 25.998188301s to createHost
	I0420 00:15:33.822789   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.824995   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.825366   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.825394   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.825548   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.825726   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.825856   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.825990   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.826193   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:15:33.826344   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0420 00:15:33.826355   94171 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:15:33.926129   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713572133.910912741
	
	I0420 00:15:33.926157   94171 fix.go:216] guest clock: 1713572133.910912741
	I0420 00:15:33.926168   94171 fix.go:229] Guest: 2024-04-20 00:15:33.910912741 +0000 UTC Remote: 2024-04-20 00:15:33.822774494 +0000 UTC m=+83.634486162 (delta=88.138247ms)
	I0420 00:15:33.926187   94171 fix.go:200] guest clock delta is within tolerance: 88.138247ms
	I0420 00:15:33.926192   94171 start.go:83] releasing machines lock for "ha-371738-m02", held for 26.101808733s
	I0420 00:15:33.926213   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.926499   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:15:33.929071   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.929522   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.929543   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.932165   94171 out.go:177] * Found network options:
	I0420 00:15:33.933543   94171 out.go:177]   - NO_PROXY=192.168.39.217
	W0420 00:15:33.934995   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 00:15:33.935029   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.935543   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.935719   94171 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:15:33.935808   94171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:15:33.935853   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	W0420 00:15:33.935917   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 00:15:33.936006   94171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:15:33.936020   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:15:33.938351   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.938565   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.938704   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.938720   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.938942   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.938952   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:33.938979   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:33.939143   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:15:33.939152   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.939355   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.939386   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:15:33.939495   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:15:33.939567   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:15:33.939687   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:15:34.183588   94171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 00:15:34.190450   94171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:15:34.190527   94171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:15:34.207725   94171 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 00:15:34.207746   94171 start.go:494] detecting cgroup driver to use...
	I0420 00:15:34.207795   94171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:15:34.225416   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:15:34.239344   94171 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:15:34.239396   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:15:34.255623   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:15:34.272311   94171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:15:34.388599   94171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:15:34.555101   94171 docker.go:233] disabling docker service ...
	I0420 00:15:34.555185   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:15:34.571384   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:15:34.584862   94171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:15:34.713895   94171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:15:34.843002   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:15:34.858864   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:15:34.879516   94171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:15:34.879587   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.890626   94171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:15:34.890692   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.902293   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.913463   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.924524   94171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:15:34.936112   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.947267   94171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.966535   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:15:34.977617   94171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:15:34.987289   94171 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 00:15:34.987336   94171 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 00:15:35.001770   94171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:15:35.013380   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:15:35.132279   94171 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 00:15:35.279321   94171 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:15:35.279416   94171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:15:35.284623   94171 start.go:562] Will wait 60s for crictl version
	I0420 00:15:35.284683   94171 ssh_runner.go:195] Run: which crictl
	I0420 00:15:35.288850   94171 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:15:35.325397   94171 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 00:15:35.325472   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:15:35.356336   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:15:35.388553   94171 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:15:35.390133   94171 out.go:177]   - env NO_PROXY=192.168.39.217
	I0420 00:15:35.391263   94171 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:15:35.394122   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:35.394503   94171 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:15:23 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:15:35.394539   94171 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:15:35.394745   94171 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:15:35.399139   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:15:35.412865   94171 mustload.go:65] Loading cluster: ha-371738
	I0420 00:15:35.413099   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:15:35.413403   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:35.413428   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:35.428152   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36737
	I0420 00:15:35.428568   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:35.429009   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:35.429029   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:35.429377   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:35.429556   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:15:35.431068   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:15:35.431368   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:35.431398   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:35.445434   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
	I0420 00:15:35.445841   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:35.446325   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:35.446356   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:35.446634   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:35.446824   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:15:35.447005   94171 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738 for IP: 192.168.39.48
	I0420 00:15:35.447016   94171 certs.go:194] generating shared ca certs ...
	I0420 00:15:35.447031   94171 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:35.447167   94171 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:15:35.447208   94171 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:15:35.447219   94171 certs.go:256] generating profile certs ...
	I0420 00:15:35.447291   94171 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key
	I0420 00:15:35.447316   94171 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.67002bff
	I0420 00:15:35.447336   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.67002bff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.48 192.168.39.254]
	I0420 00:15:35.526118   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.67002bff ...
	I0420 00:15:35.526149   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.67002bff: {Name:mk5a6afacdffd81cc24458df0cd2fcf66072f99f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:35.526333   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.67002bff ...
	I0420 00:15:35.526350   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.67002bff: {Name:mkdbf1224bcff9fd3a1190522604ec463ca02a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:15:35.526451   94171 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.67002bff -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt
	I0420 00:15:35.526589   94171 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.67002bff -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key
	I0420 00:15:35.526717   94171 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key
	I0420 00:15:35.526735   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:15:35.526748   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:15:35.526761   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:15:35.526771   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:15:35.526782   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:15:35.526792   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:15:35.526801   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:15:35.526813   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:15:35.526865   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:15:35.526892   94171 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:15:35.526901   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:15:35.526920   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:15:35.526951   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:15:35.526971   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:15:35.527008   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:15:35.527032   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:15:35.527046   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:15:35.527058   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:15:35.527090   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:15:35.529870   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:35.530323   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:15:35.530350   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:35.530658   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:15:35.530849   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:15:35.531021   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:15:35.531173   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:15:35.609690   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0420 00:15:35.615153   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0420 00:15:35.629469   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0420 00:15:35.634561   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0420 00:15:35.649926   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0420 00:15:35.655253   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0420 00:15:35.670201   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0420 00:15:35.674959   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0420 00:15:35.687122   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0420 00:15:35.692016   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0420 00:15:35.707599   94171 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0420 00:15:35.712686   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0420 00:15:35.726635   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:15:35.754434   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:15:35.780962   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:15:35.806894   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:15:35.832911   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0420 00:15:35.860118   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:15:35.891694   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:15:35.917813   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 00:15:35.943651   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:15:35.969659   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:15:35.995643   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:15:36.022031   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0420 00:15:36.039822   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0420 00:15:36.058775   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0420 00:15:36.077102   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0420 00:15:36.095498   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0420 00:15:36.114478   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0420 00:15:36.135508   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0420 00:15:36.154815   94171 ssh_runner.go:195] Run: openssl version
	I0420 00:15:36.161381   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:15:36.173665   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:15:36.178745   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:15:36.178823   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:15:36.184809   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 00:15:36.196249   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:15:36.209703   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:15:36.215107   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:15:36.215149   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:15:36.221286   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 00:15:36.232842   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:15:36.244268   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:15:36.249153   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:15:36.249197   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:15:36.255323   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 00:15:36.267770   94171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:15:36.273280   94171 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 00:15:36.273357   94171 kubeadm.go:928] updating node {m02 192.168.39.48 8443 v1.30.0 crio true true} ...
	I0420 00:15:36.273451   94171 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-371738-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 00:15:36.273480   94171 kube-vip.go:111] generating kube-vip config ...
	I0420 00:15:36.273517   94171 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0420 00:15:36.290259   94171 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 00:15:36.290328   94171 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0420 00:15:36.290379   94171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:15:36.301623   94171 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0420 00:15:36.301675   94171 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0420 00:15:36.312406   94171 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0420 00:15:36.312430   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0420 00:15:36.312508   94171 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0420 00:15:36.312532   94171 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0420 00:15:36.312558   94171 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0420 00:15:36.318740   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0420 00:15:36.318768   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0420 00:15:36.983078   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0420 00:15:36.983161   94171 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0420 00:15:36.989155   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0420 00:15:36.989189   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0420 00:15:37.363032   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:15:37.379663   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0420 00:15:37.379738   94171 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0420 00:15:37.384472   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0420 00:15:37.384504   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0420 00:15:37.851826   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0420 00:15:37.864931   94171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0420 00:15:37.884017   94171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:15:37.904291   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0420 00:15:37.922824   94171 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0420 00:15:37.927295   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:15:37.941424   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:15:38.082599   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:15:38.103136   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:15:38.103655   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:15:38.103699   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:15:38.119038   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0420 00:15:38.119467   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:15:38.119966   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:15:38.119990   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:15:38.120416   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:15:38.120670   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:15:38.120871   94171 start.go:316] joinCluster: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:15:38.120997   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0420 00:15:38.121023   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:15:38.124250   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:38.124735   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:15:38.124766   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:15:38.124882   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:15:38.125227   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:15:38.125400   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:15:38.125544   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:15:38.302630   94171 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:15:38.302694   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9o24la.0magggnm0kh9r5cv --discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-371738-m02 --control-plane --apiserver-advertise-address=192.168.39.48 --apiserver-bind-port=8443"
	I0420 00:16:01.876572   94171 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9o24la.0magggnm0kh9r5cv --discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-371738-m02 --control-plane --apiserver-advertise-address=192.168.39.48 --apiserver-bind-port=8443": (23.573845696s)
	I0420 00:16:01.876628   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0420 00:16:02.453472   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-371738-m02 minikube.k8s.io/updated_at=2024_04_20T00_16_02_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-371738 minikube.k8s.io/primary=false
	I0420 00:16:02.600156   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-371738-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0420 00:16:02.726375   94171 start.go:318] duration metric: took 24.605498766s to joinCluster
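
The sequence above is the full control-plane join: minikube asks the existing node for a fresh bootstrap token (kubeadm token create --print-join-command --ttl=0, where the zero TTL makes the token non-expiring), runs the printed kubeadm join command on m02 with --control-plane, labels the new node with the minikube.k8s.io/* metadata, and removes the node-role.kubernetes.io/control-plane:NoSchedule taint (the trailing "-" in the taint command is what deletes it), so the second control-plane node can also schedule regular workloads. As a rough sketch, assuming a kubeconfig pointed at this ha-371738 cluster, the result could be checked with:

  kubectl get nodes -o wide
  kubectl describe node ha-371738-m02 | grep -i -A2 taints
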
	I0420 00:16:02.726459   94171 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:16:02.728078   94171 out.go:177] * Verifying Kubernetes components...
	I0420 00:16:02.726786   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:16:02.729388   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:16:02.984478   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:16:03.027056   94171 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:16:03.027286   94171 kapi.go:59] client config for ha-371738: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt", KeyFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key", CAFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0420 00:16:03.027347   94171 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0420 00:16:03.027701   94171 node_ready.go:35] waiting up to 6m0s for node "ha-371738-m02" to be "Ready" ...
	I0420 00:16:03.027819   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:03.027828   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:03.027835   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:03.027840   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:03.040597   94171 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0420 00:16:03.528251   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:03.528280   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:03.528292   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:03.528298   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:03.538500   94171 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0420 00:16:04.028366   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:04.028392   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:04.028402   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:04.028407   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:04.032845   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:04.527944   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:04.527973   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:04.527985   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:04.527990   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:04.534032   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:16:05.028034   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:05.028059   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:05.028070   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:05.028075   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:05.037367   94171 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 00:16:05.038394   94171 node_ready.go:53] node "ha-371738-m02" has status "Ready":"False"
	I0420 00:16:05.528593   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:05.528619   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:05.528628   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:05.528635   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:05.533594   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:06.028799   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:06.028825   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:06.028834   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:06.028838   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:06.032412   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:06.528130   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:06.528160   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:06.528175   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:06.528182   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:06.531637   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.027993   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:07.028021   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.028030   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.028036   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.031484   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.528642   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:07.528671   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.528681   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.528688   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.532410   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.533072   94171 node_ready.go:49] node "ha-371738-m02" has status "Ready":"True"
	I0420 00:16:07.533096   94171 node_ready.go:38] duration metric: took 4.505362063s for node "ha-371738-m02" to be "Ready" ...
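
The node_ready loop above simply polls GET /api/v1/nodes/ha-371738-m02 about every 500ms until the node reports a Ready condition of True (the "Ready":"False" / "Ready":"True" lines). A roughly equivalent one-off check, assuming kubectl and this cluster's kubeconfig, would be:

  kubectl wait --for=condition=Ready node/ha-371738-m02 --timeout=6m0s
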
	I0420 00:16:07.533109   94171 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:16:07.533224   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:07.533238   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.533249   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.533257   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.542380   94171 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 00:16:07.550113   94171 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.550212   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9hc82
	I0420 00:16:07.550224   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.550233   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.550240   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.553841   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.554434   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:07.554450   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.554457   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.554462   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.558093   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.558719   94171 pod_ready.go:92] pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:07.558738   94171 pod_ready.go:81] duration metric: took 8.5945ms for pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.558747   94171 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.558798   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jvvpr
	I0420 00:16:07.558807   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.558813   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.558817   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.566943   94171 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0420 00:16:07.568014   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:07.568030   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.568038   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.568043   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.571116   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.571834   94171 pod_ready.go:92] pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:07.571851   94171 pod_ready.go:81] duration metric: took 13.098517ms for pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.571860   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.571921   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738
	I0420 00:16:07.571930   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.571937   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.571942   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.575504   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.576045   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:07.576058   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.576066   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.576069   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.578942   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:07.579568   94171 pod_ready.go:92] pod "etcd-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:07.579585   94171 pod_ready.go:81] duration metric: took 7.719539ms for pod "etcd-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.579593   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:07.579649   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:07.579657   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.579663   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.579667   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.583027   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:07.583726   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:07.583740   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:07.583747   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:07.583751   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:07.586376   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:08.080395   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:08.080420   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:08.080428   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:08.080432   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:08.083902   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:08.084567   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:08.084585   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:08.084591   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:08.084594   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:08.087411   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:08.579816   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:08.579842   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:08.579849   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:08.579853   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:08.583538   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:08.584123   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:08.584139   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:08.584150   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:08.584155   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:08.587089   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:09.079861   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:09.079888   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:09.079895   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:09.079898   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:09.083662   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:09.084688   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:09.084704   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:09.084711   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:09.084716   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:09.087530   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:09.580509   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:09.580533   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:09.580541   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:09.580544   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:09.584943   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:09.585594   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:09.585610   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:09.585617   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:09.585621   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:09.588475   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:09.589151   94171 pod_ready.go:102] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"False"
	I0420 00:16:10.080454   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:10.080477   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:10.080485   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:10.080489   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:10.084366   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:10.085425   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:10.085439   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:10.085447   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:10.085456   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:10.088025   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:10.580769   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:10.580792   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:10.580801   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:10.580804   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:10.584972   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:10.586281   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:10.586294   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:10.586301   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:10.586305   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:10.589252   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:11.080491   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:11.080520   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:11.080532   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:11.080540   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:11.083840   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:11.084560   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:11.084576   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:11.084584   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:11.084588   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:11.087276   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:11.580006   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:11.580033   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:11.580046   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:11.580054   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:11.588962   94171 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0420 00:16:11.589636   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:11.589653   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:11.589661   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:11.589666   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:11.594181   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:11.594917   94171 pod_ready.go:102] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"False"
	I0420 00:16:12.080399   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:12.080426   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:12.080447   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:12.080452   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:12.082979   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:12.083841   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:12.083857   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:12.083863   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:12.083865   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:12.086264   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:12.580286   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:12.580309   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:12.580320   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:12.580325   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:12.584375   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:12.585372   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:12.585392   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:12.585401   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:12.585409   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:12.588051   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:13.080657   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:13.080682   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:13.080690   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:13.080694   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:13.084699   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:13.085544   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:13.085562   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:13.085569   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:13.085573   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:13.088305   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:13.579981   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:13.580006   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:13.580013   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:13.580017   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:13.583406   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:13.584270   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:13.584286   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:13.584296   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:13.584301   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:13.587678   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:14.080722   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:14.080745   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.080754   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.080757   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.084893   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:14.085577   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:14.085593   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.085603   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.085608   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.088897   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:14.089424   94171 pod_ready.go:102] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"False"
	I0420 00:16:14.579821   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:16:14.579850   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.579860   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.579864   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.585596   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:16:14.586409   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:14.586427   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.586435   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.586439   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.589410   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:14.590076   94171 pod_ready.go:92] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:14.590098   94171 pod_ready.go:81] duration metric: took 7.010496982s for pod "etcd-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:14.590130   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:14.590197   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738
	I0420 00:16:14.590208   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.590218   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.590225   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.592900   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:14.593681   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:14.593698   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.593708   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.593713   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.596210   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:14.597047   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:14.597069   94171 pod_ready.go:81] duration metric: took 6.926378ms for pod "kube-apiserver-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:14.597082   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:14.597143   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:14.597154   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.597164   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.597173   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.599734   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:14.600476   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:14.600489   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:14.600495   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:14.600498   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:14.603031   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:15.098029   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:15.098054   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:15.098061   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:15.098067   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:15.102974   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:15.104078   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:15.104095   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:15.104103   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:15.104106   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:15.107208   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:15.597830   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:15.597855   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:15.597867   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:15.597872   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:15.601999   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:15.603984   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:15.603999   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:15.604007   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:15.604013   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:15.606658   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:16.097502   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:16.097527   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.097535   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.097539   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.102180   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:16.103130   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:16.103153   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.103161   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.103165   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.106996   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:16.597927   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:16:16.597953   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.597961   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.597965   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.602269   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:16.603505   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:16.603534   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.603545   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.603551   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.610277   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:16:16.612863   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:16.612881   94171 pod_ready.go:81] duration metric: took 2.015792383s for pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.612892   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.612947   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738
	I0420 00:16:16.612954   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.612961   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.612964   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.620342   94171 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 00:16:16.620960   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:16.620975   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.620982   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.620985   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.623607   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:16.624210   94171 pod_ready.go:92] pod "kube-controller-manager-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:16.624225   94171 pod_ready.go:81] duration metric: took 11.32732ms for pod "kube-controller-manager-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.624234   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59wls" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.624285   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59wls
	I0420 00:16:16.624292   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.624299   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.624305   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.627444   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:16.628184   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:16.628195   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.628203   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.628208   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.630782   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:16:16.631249   94171 pod_ready.go:92] pod "kube-proxy-59wls" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:16.631264   94171 pod_ready.go:81] duration metric: took 7.02177ms for pod "kube-proxy-59wls" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.631271   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zw62l" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.631312   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw62l
	I0420 00:16:16.631317   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.631324   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.631327   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.640083   94171 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0420 00:16:16.728866   94171 request.go:629] Waited for 88.26916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:16.728945   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:16.728953   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.728964   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.728973   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.735095   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:16:16.735892   94171 pod_ready.go:92] pod "kube-proxy-zw62l" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:16.735918   94171 pod_ready.go:81] duration metric: took 104.638962ms for pod "kube-proxy-zw62l" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.735932   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:16.928995   94171 request.go:629] Waited for 192.975571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738
	I0420 00:16:16.929089   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738
	I0420 00:16:16.929096   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:16.929112   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:16.929122   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:16.932863   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:17.129543   94171 request.go:629] Waited for 196.06387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:17.129615   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:16:17.129624   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.129643   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.129653   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.134397   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:17.135431   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:17.135449   94171 pod_ready.go:81] duration metric: took 399.509647ms for pod "kube-scheduler-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:17.135460   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:17.329650   94171 request.go:629] Waited for 194.095177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m02
	I0420 00:16:17.329720   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m02
	I0420 00:16:17.329727   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.329738   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.329746   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.333146   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:17.529289   94171 request.go:629] Waited for 195.373547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:17.529369   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:16:17.529380   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.529391   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.529409   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.537480   94171 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0420 00:16:17.538632   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:16:17.538656   94171 pod_ready.go:81] duration metric: took 403.188721ms for pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:16:17.538671   94171 pod_ready.go:38] duration metric: took 10.005521739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
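
The pod_ready phase repeats the same pattern per pod: for each system-critical pod it fetches the pod, then the node it runs on, and moves on once the pod's Ready condition is True. The "Waited ... due to client-side throttling, not priority and fairness" lines are client-go's own rate limiter kicking in; with QPS and Burst left at 0 in the rest.Config shown earlier, client-go falls back to its conservative client-side default (historically about 5 requests/sec with a burst of 10). A hedged equivalent of this readiness check with kubectl, using a few of the labels listed in the log, would be:

  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s
  kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m0s
  kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m0s
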
	I0420 00:16:17.538693   94171 api_server.go:52] waiting for apiserver process to appear ...
	I0420 00:16:17.538762   94171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:16:17.558026   94171 api_server.go:72] duration metric: took 14.83152097s to wait for apiserver process to appear ...
	I0420 00:16:17.558054   94171 api_server.go:88] waiting for apiserver healthz status ...
	I0420 00:16:17.558078   94171 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0420 00:16:17.562889   94171 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0420 00:16:17.562975   94171 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0420 00:16:17.562988   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.563000   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.563011   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.564643   94171 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0420 00:16:17.564962   94171 api_server.go:141] control plane version: v1.30.0
	I0420 00:16:17.565007   94171 api_server.go:131] duration metric: took 6.943763ms to wait for apiserver health ...
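
Before listing pods, the health check hits /healthz on the first control-plane endpoint directly (192.168.39.217:8443) rather than through the 192.168.39.254 VIP. A sketch of the same probe from the test host, assuming the CA and client certificate paths shown in the client config above:

  curl --cacert /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt \
       --cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt \
       --key /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key \
       https://192.168.39.217:8443/healthz
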
	I0420 00:16:17.565018   94171 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 00:16:17.729433   94171 request.go:629] Waited for 164.318649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:17.729514   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:17.729521   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.729531   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.729539   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.736762   94171 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 00:16:17.742335   94171 system_pods.go:59] 17 kube-system pods found
	I0420 00:16:17.742367   94171 system_pods.go:61] "coredns-7db6d8ff4d-9hc82" [279d40d8-eb21-476c-ba36-bc7592777126] Running
	I0420 00:16:17.742372   94171 system_pods.go:61] "coredns-7db6d8ff4d-jvvpr" [104d5328-1f6a-4747-8e26-9a98e38dc1cc] Running
	I0420 00:16:17.742376   94171 system_pods.go:61] "etcd-ha-371738" [5e23c4a0-7c15-47b9-b722-82e61a10f286] Running
	I0420 00:16:17.742379   94171 system_pods.go:61] "etcd-ha-371738-m02" [712e8a6e-7007-4cf1-8a0c-4e33eeccebcd] Running
	I0420 00:16:17.742382   94171 system_pods.go:61] "kindnet-ggw7f" [2e0d1c1a-6fb4-4c3e-ae2b-41cfccaba2dd] Running
	I0420 00:16:17.742386   94171 system_pods.go:61] "kindnet-s87k2" [0820561f-f794-4ac5-8ce2-ae0cb4310c3e] Running
	I0420 00:16:17.742389   94171 system_pods.go:61] "kube-apiserver-ha-371738" [301ce02b-37b1-42ba-8a45-fbde327e2a02] Running
	I0420 00:16:17.742395   94171 system_pods.go:61] "kube-apiserver-ha-371738-m02" [a22f017a-e7b0-4748-9486-b52d35284584] Running
	I0420 00:16:17.742398   94171 system_pods.go:61] "kube-controller-manager-ha-371738" [bc03ed79-b024-46b1-af13-45a3def8bcae] Running
	I0420 00:16:17.742406   94171 system_pods.go:61] "kube-controller-manager-ha-371738-m02" [7b460bfb-bddf-46c0-a30c-f5e9757a32ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 00:16:17.742411   94171 system_pods.go:61] "kube-proxy-59wls" [722c6b7d-109b-4201-a5f1-c02a65befcf2] Running
	I0420 00:16:17.742415   94171 system_pods.go:61] "kube-proxy-zw62l" [dad72bfc-65c2-4007-9d5c-682ddf48c44d] Running
	I0420 00:16:17.742418   94171 system_pods.go:61] "kube-scheduler-ha-371738" [a3df56d3-c437-4ea9-b73d-2b22e93334b3] Running
	I0420 00:16:17.742422   94171 system_pods.go:61] "kube-scheduler-ha-371738-m02" [47dba6e4-cb4d-43e8-a173-06d13b08fd55] Running
	I0420 00:16:17.742425   94171 system_pods.go:61] "kube-vip-ha-371738" [8d162382-25bb-4393-8c45-a8487b571605] Running
	I0420 00:16:17.742428   94171 system_pods.go:61] "kube-vip-ha-371738-m02" [76331738-5bca-4724-939e-4c16a906e65b] Running
	I0420 00:16:17.742431   94171 system_pods.go:61] "storage-provisioner" [1d7b89d3-7cff-4258-8215-819971fa1b81] Running
	I0420 00:16:17.742440   94171 system_pods.go:74] duration metric: took 177.416016ms to wait for pod list to return data ...
	I0420 00:16:17.742448   94171 default_sa.go:34] waiting for default service account to be created ...
	I0420 00:16:17.929569   94171 request.go:629] Waited for 187.046792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0420 00:16:17.929628   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0420 00:16:17.929633   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:17.929640   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:17.929644   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:17.933818   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:16:17.934059   94171 default_sa.go:45] found service account: "default"
	I0420 00:16:17.934074   94171 default_sa.go:55] duration metric: took 191.619416ms for default service account to be created ...
	I0420 00:16:17.934083   94171 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 00:16:18.129599   94171 request.go:629] Waited for 195.448432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:18.129681   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:16:18.129687   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:18.129694   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:18.129698   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:18.139371   94171 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 00:16:18.145416   94171 system_pods.go:86] 17 kube-system pods found
	I0420 00:16:18.145442   94171 system_pods.go:89] "coredns-7db6d8ff4d-9hc82" [279d40d8-eb21-476c-ba36-bc7592777126] Running
	I0420 00:16:18.145447   94171 system_pods.go:89] "coredns-7db6d8ff4d-jvvpr" [104d5328-1f6a-4747-8e26-9a98e38dc1cc] Running
	I0420 00:16:18.145452   94171 system_pods.go:89] "etcd-ha-371738" [5e23c4a0-7c15-47b9-b722-82e61a10f286] Running
	I0420 00:16:18.145456   94171 system_pods.go:89] "etcd-ha-371738-m02" [712e8a6e-7007-4cf1-8a0c-4e33eeccebcd] Running
	I0420 00:16:18.145460   94171 system_pods.go:89] "kindnet-ggw7f" [2e0d1c1a-6fb4-4c3e-ae2b-41cfccaba2dd] Running
	I0420 00:16:18.145464   94171 system_pods.go:89] "kindnet-s87k2" [0820561f-f794-4ac5-8ce2-ae0cb4310c3e] Running
	I0420 00:16:18.145468   94171 system_pods.go:89] "kube-apiserver-ha-371738" [301ce02b-37b1-42ba-8a45-fbde327e2a02] Running
	I0420 00:16:18.145472   94171 system_pods.go:89] "kube-apiserver-ha-371738-m02" [a22f017a-e7b0-4748-9486-b52d35284584] Running
	I0420 00:16:18.145476   94171 system_pods.go:89] "kube-controller-manager-ha-371738" [bc03ed79-b024-46b1-af13-45a3def8bcae] Running
	I0420 00:16:18.145483   94171 system_pods.go:89] "kube-controller-manager-ha-371738-m02" [7b460bfb-bddf-46c0-a30c-f5e9757a32ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 00:16:18.145493   94171 system_pods.go:89] "kube-proxy-59wls" [722c6b7d-109b-4201-a5f1-c02a65befcf2] Running
	I0420 00:16:18.145498   94171 system_pods.go:89] "kube-proxy-zw62l" [dad72bfc-65c2-4007-9d5c-682ddf48c44d] Running
	I0420 00:16:18.145502   94171 system_pods.go:89] "kube-scheduler-ha-371738" [a3df56d3-c437-4ea9-b73d-2b22e93334b3] Running
	I0420 00:16:18.145506   94171 system_pods.go:89] "kube-scheduler-ha-371738-m02" [47dba6e4-cb4d-43e8-a173-06d13b08fd55] Running
	I0420 00:16:18.145513   94171 system_pods.go:89] "kube-vip-ha-371738" [8d162382-25bb-4393-8c45-a8487b571605] Running
	I0420 00:16:18.145516   94171 system_pods.go:89] "kube-vip-ha-371738-m02" [76331738-5bca-4724-939e-4c16a906e65b] Running
	I0420 00:16:18.145519   94171 system_pods.go:89] "storage-provisioner" [1d7b89d3-7cff-4258-8215-819971fa1b81] Running
	I0420 00:16:18.145527   94171 system_pods.go:126] duration metric: took 211.437795ms to wait for k8s-apps to be running ...
	I0420 00:16:18.145542   94171 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 00:16:18.145604   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:16:18.164996   94171 system_svc.go:56] duration metric: took 19.44571ms WaitForService to wait for kubelet
	I0420 00:16:18.165032   94171 kubeadm.go:576] duration metric: took 15.438532203s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:16:18.165056   94171 node_conditions.go:102] verifying NodePressure condition ...
	I0420 00:16:18.329498   94171 request.go:629] Waited for 164.361897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0420 00:16:18.329578   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0420 00:16:18.329583   94171 round_trippers.go:469] Request Headers:
	I0420 00:16:18.329592   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:16:18.329596   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:16:18.333499   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:16:18.334982   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:16:18.335011   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:16:18.335026   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:16:18.335031   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:16:18.335036   94171 node_conditions.go:105] duration metric: took 169.973195ms to run NodePressure ...
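
Note: the NodePressure check above reads each node's CPU and ephemeral-storage capacity from the Kubernetes API, and the "Waited for ... due to client-side throttling" lines come from client-go's built-in rate limiter. A minimal client-go sketch of the same capacity read (illustrative only, not minikube's code; the kubeconfig path and the QPS/Burst values are assumptions):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig pointing at the cluster under test.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        // Raising QPS/Burst reduces the client-side throttling waits seen in the log.
        cfg.QPS = 50
        cfg.Burst = 100

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The same two fields the node_conditions lines report per node.
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
        }
    }
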
	I0420 00:16:18.335051   94171 start.go:240] waiting for startup goroutines ...
	I0420 00:16:18.335087   94171 start.go:254] writing updated cluster config ...
	I0420 00:16:18.337370   94171 out.go:177] 
	I0420 00:16:18.338988   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:16:18.339079   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:16:18.340830   94171 out.go:177] * Starting "ha-371738-m03" control-plane node in "ha-371738" cluster
	I0420 00:16:18.342061   94171 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:16:18.342091   94171 cache.go:56] Caching tarball of preloaded images
	I0420 00:16:18.342186   94171 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:16:18.342197   94171 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:16:18.342283   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:16:18.342449   94171 start.go:360] acquireMachinesLock for ha-371738-m03: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:16:18.342494   94171 start.go:364] duration metric: took 25.993µs to acquireMachinesLock for "ha-371738-m03"
	I0420 00:16:18.342509   94171 start.go:93] Provisioning new machine with config: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:16:18.342597   94171 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0420 00:16:18.344180   94171 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0420 00:16:18.344259   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:16:18.344295   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:16:18.360912   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0420 00:16:18.361559   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:16:18.362122   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:16:18.362150   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:16:18.362512   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:16:18.362706   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetMachineName
	I0420 00:16:18.362883   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:18.363061   94171 start.go:159] libmachine.API.Create for "ha-371738" (driver="kvm2")
	I0420 00:16:18.363095   94171 client.go:168] LocalClient.Create starting
	I0420 00:16:18.363134   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 00:16:18.363174   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:16:18.363192   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:16:18.363260   94171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 00:16:18.363290   94171 main.go:141] libmachine: Decoding PEM data...
	I0420 00:16:18.363307   94171 main.go:141] libmachine: Parsing certificate...
	I0420 00:16:18.363334   94171 main.go:141] libmachine: Running pre-create checks...
	I0420 00:16:18.363346   94171 main.go:141] libmachine: (ha-371738-m03) Calling .PreCreateCheck
	I0420 00:16:18.363530   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetConfigRaw
	I0420 00:16:18.363986   94171 main.go:141] libmachine: Creating machine...
	I0420 00:16:18.364003   94171 main.go:141] libmachine: (ha-371738-m03) Calling .Create
	I0420 00:16:18.364173   94171 main.go:141] libmachine: (ha-371738-m03) Creating KVM machine...
	I0420 00:16:18.365642   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found existing default KVM network
	I0420 00:16:18.365776   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found existing private KVM network mk-ha-371738
	I0420 00:16:18.365934   94171 main.go:141] libmachine: (ha-371738-m03) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03 ...
	I0420 00:16:18.365961   94171 main.go:141] libmachine: (ha-371738-m03) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 00:16:18.366012   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:18.365909   94971 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:16:18.366110   94171 main.go:141] libmachine: (ha-371738-m03) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 00:16:18.596347   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:18.596218   94971 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa...
	I0420 00:16:18.690070   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:18.689924   94971 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/ha-371738-m03.rawdisk...
	I0420 00:16:18.690110   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Writing magic tar header
	I0420 00:16:18.690125   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Writing SSH key tar header
	I0420 00:16:18.690138   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:18.690078   94971 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03 ...
	I0420 00:16:18.690246   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03
	I0420 00:16:18.690268   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03 (perms=drwx------)
	I0420 00:16:18.690276   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 00:16:18.690287   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:16:18.690296   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 00:16:18.690304   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 00:16:18.690312   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home/jenkins
	I0420 00:16:18.690318   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Checking permissions on dir: /home
	I0420 00:16:18.690326   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Skipping /home - not owner
	I0420 00:16:18.690336   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 00:16:18.690351   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 00:16:18.690369   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 00:16:18.690382   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 00:16:18.690396   94171 main.go:141] libmachine: (ha-371738-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 00:16:18.690404   94171 main.go:141] libmachine: (ha-371738-m03) Creating domain...
	I0420 00:16:18.691283   94171 main.go:141] libmachine: (ha-371738-m03) define libvirt domain using xml: 
	I0420 00:16:18.691302   94171 main.go:141] libmachine: (ha-371738-m03) <domain type='kvm'>
	I0420 00:16:18.691309   94171 main.go:141] libmachine: (ha-371738-m03)   <name>ha-371738-m03</name>
	I0420 00:16:18.691318   94171 main.go:141] libmachine: (ha-371738-m03)   <memory unit='MiB'>2200</memory>
	I0420 00:16:18.691324   94171 main.go:141] libmachine: (ha-371738-m03)   <vcpu>2</vcpu>
	I0420 00:16:18.691329   94171 main.go:141] libmachine: (ha-371738-m03)   <features>
	I0420 00:16:18.691334   94171 main.go:141] libmachine: (ha-371738-m03)     <acpi/>
	I0420 00:16:18.691341   94171 main.go:141] libmachine: (ha-371738-m03)     <apic/>
	I0420 00:16:18.691346   94171 main.go:141] libmachine: (ha-371738-m03)     <pae/>
	I0420 00:16:18.691352   94171 main.go:141] libmachine: (ha-371738-m03)     
	I0420 00:16:18.691358   94171 main.go:141] libmachine: (ha-371738-m03)   </features>
	I0420 00:16:18.691365   94171 main.go:141] libmachine: (ha-371738-m03)   <cpu mode='host-passthrough'>
	I0420 00:16:18.691373   94171 main.go:141] libmachine: (ha-371738-m03)   
	I0420 00:16:18.691377   94171 main.go:141] libmachine: (ha-371738-m03)   </cpu>
	I0420 00:16:18.691383   94171 main.go:141] libmachine: (ha-371738-m03)   <os>
	I0420 00:16:18.691397   94171 main.go:141] libmachine: (ha-371738-m03)     <type>hvm</type>
	I0420 00:16:18.691407   94171 main.go:141] libmachine: (ha-371738-m03)     <boot dev='cdrom'/>
	I0420 00:16:18.691439   94171 main.go:141] libmachine: (ha-371738-m03)     <boot dev='hd'/>
	I0420 00:16:18.691459   94171 main.go:141] libmachine: (ha-371738-m03)     <bootmenu enable='no'/>
	I0420 00:16:18.691468   94171 main.go:141] libmachine: (ha-371738-m03)   </os>
	I0420 00:16:18.691474   94171 main.go:141] libmachine: (ha-371738-m03)   <devices>
	I0420 00:16:18.691493   94171 main.go:141] libmachine: (ha-371738-m03)     <disk type='file' device='cdrom'>
	I0420 00:16:18.691516   94171 main.go:141] libmachine: (ha-371738-m03)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/boot2docker.iso'/>
	I0420 00:16:18.691554   94171 main.go:141] libmachine: (ha-371738-m03)       <target dev='hdc' bus='scsi'/>
	I0420 00:16:18.691585   94171 main.go:141] libmachine: (ha-371738-m03)       <readonly/>
	I0420 00:16:18.691595   94171 main.go:141] libmachine: (ha-371738-m03)     </disk>
	I0420 00:16:18.691604   94171 main.go:141] libmachine: (ha-371738-m03)     <disk type='file' device='disk'>
	I0420 00:16:18.691616   94171 main.go:141] libmachine: (ha-371738-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 00:16:18.691632   94171 main.go:141] libmachine: (ha-371738-m03)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/ha-371738-m03.rawdisk'/>
	I0420 00:16:18.691645   94171 main.go:141] libmachine: (ha-371738-m03)       <target dev='hda' bus='virtio'/>
	I0420 00:16:18.691657   94171 main.go:141] libmachine: (ha-371738-m03)     </disk>
	I0420 00:16:18.691669   94171 main.go:141] libmachine: (ha-371738-m03)     <interface type='network'>
	I0420 00:16:18.691679   94171 main.go:141] libmachine: (ha-371738-m03)       <source network='mk-ha-371738'/>
	I0420 00:16:18.691684   94171 main.go:141] libmachine: (ha-371738-m03)       <model type='virtio'/>
	I0420 00:16:18.691692   94171 main.go:141] libmachine: (ha-371738-m03)     </interface>
	I0420 00:16:18.691697   94171 main.go:141] libmachine: (ha-371738-m03)     <interface type='network'>
	I0420 00:16:18.691709   94171 main.go:141] libmachine: (ha-371738-m03)       <source network='default'/>
	I0420 00:16:18.691717   94171 main.go:141] libmachine: (ha-371738-m03)       <model type='virtio'/>
	I0420 00:16:18.691721   94171 main.go:141] libmachine: (ha-371738-m03)     </interface>
	I0420 00:16:18.691727   94171 main.go:141] libmachine: (ha-371738-m03)     <serial type='pty'>
	I0420 00:16:18.691734   94171 main.go:141] libmachine: (ha-371738-m03)       <target port='0'/>
	I0420 00:16:18.691739   94171 main.go:141] libmachine: (ha-371738-m03)     </serial>
	I0420 00:16:18.691748   94171 main.go:141] libmachine: (ha-371738-m03)     <console type='pty'>
	I0420 00:16:18.691753   94171 main.go:141] libmachine: (ha-371738-m03)       <target type='serial' port='0'/>
	I0420 00:16:18.691760   94171 main.go:141] libmachine: (ha-371738-m03)     </console>
	I0420 00:16:18.691766   94171 main.go:141] libmachine: (ha-371738-m03)     <rng model='virtio'>
	I0420 00:16:18.691775   94171 main.go:141] libmachine: (ha-371738-m03)       <backend model='random'>/dev/random</backend>
	I0420 00:16:18.691780   94171 main.go:141] libmachine: (ha-371738-m03)     </rng>
	I0420 00:16:18.691787   94171 main.go:141] libmachine: (ha-371738-m03)     
	I0420 00:16:18.691791   94171 main.go:141] libmachine: (ha-371738-m03)     
	I0420 00:16:18.691796   94171 main.go:141] libmachine: (ha-371738-m03)   </devices>
	I0420 00:16:18.691801   94171 main.go:141] libmachine: (ha-371738-m03) </domain>
	I0420 00:16:18.691808   94171 main.go:141] libmachine: (ha-371738-m03) 
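
Note: the block above is the libvirt domain XML the kvm2 driver defines for the new ha-371738-m03 VM (2 vCPUs, 2200 MiB, the boot2docker ISO as cdrom, a raw disk, and two virtio NICs on the default and mk-ha-371738 networks). As a rough stand-alone sketch of defining and starting a domain from such XML, assuming the libvirt.org/go/libvirt bindings (which need the libvirt C library installed) rather than the driver's actual code:

    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Assumption: the XML printed above has been saved to domain.xml.
        xml, err := os.ReadFile("domain.xml")
        if err != nil {
            panic(err)
        }
        // Same URI as the profile's KVMQemuURI setting.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // register the domain definition
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // start it ("Creating domain..." in the log)
            panic(err)
        }
        name, _ := dom.GetName()
        fmt.Println("defined and started domain:", name)
    }
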
	I0420 00:16:18.698609   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:9a:ab:84 in network default
	I0420 00:16:18.699178   94171 main.go:141] libmachine: (ha-371738-m03) Ensuring networks are active...
	I0420 00:16:18.699212   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:18.700006   94171 main.go:141] libmachine: (ha-371738-m03) Ensuring network default is active
	I0420 00:16:18.700336   94171 main.go:141] libmachine: (ha-371738-m03) Ensuring network mk-ha-371738 is active
	I0420 00:16:18.700677   94171 main.go:141] libmachine: (ha-371738-m03) Getting domain xml...
	I0420 00:16:18.701358   94171 main.go:141] libmachine: (ha-371738-m03) Creating domain...
	I0420 00:16:19.935037   94171 main.go:141] libmachine: (ha-371738-m03) Waiting to get IP...
	I0420 00:16:19.935860   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:19.936363   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:19.936391   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:19.936337   94971 retry.go:31] will retry after 252.638179ms: waiting for machine to come up
	I0420 00:16:20.190786   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:20.191318   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:20.191350   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:20.191278   94971 retry.go:31] will retry after 315.019844ms: waiting for machine to come up
	I0420 00:16:20.507924   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:20.508352   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:20.508386   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:20.508281   94971 retry.go:31] will retry after 394.142198ms: waiting for machine to come up
	I0420 00:16:20.903536   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:20.904177   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:20.904215   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:20.904133   94971 retry.go:31] will retry after 508.732448ms: waiting for machine to come up
	I0420 00:16:21.414506   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:21.415012   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:21.415046   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:21.414949   94971 retry.go:31] will retry after 668.372993ms: waiting for machine to come up
	I0420 00:16:22.084735   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:22.085283   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:22.085305   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:22.085242   94971 retry.go:31] will retry after 684.969185ms: waiting for machine to come up
	I0420 00:16:22.771773   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:22.772407   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:22.772438   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:22.772356   94971 retry.go:31] will retry after 829.690915ms: waiting for machine to come up
	I0420 00:16:23.603601   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:23.604083   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:23.604111   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:23.604023   94971 retry.go:31] will retry after 1.241006066s: waiting for machine to come up
	I0420 00:16:24.846365   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:24.846812   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:24.846834   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:24.846780   94971 retry.go:31] will retry after 1.636439727s: waiting for machine to come up
	I0420 00:16:26.485446   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:26.485860   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:26.485901   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:26.485809   94971 retry.go:31] will retry after 2.040758446s: waiting for machine to come up
	I0420 00:16:28.528569   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:28.529195   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:28.529226   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:28.529141   94971 retry.go:31] will retry after 2.173228331s: waiting for machine to come up
	I0420 00:16:30.704551   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:30.705015   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:30.705045   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:30.704968   94971 retry.go:31] will retry after 2.195131281s: waiting for machine to come up
	I0420 00:16:32.902260   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:32.902681   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:32.902706   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:32.902637   94971 retry.go:31] will retry after 4.511428582s: waiting for machine to come up
	I0420 00:16:37.418440   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:37.418811   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find current IP address of domain ha-371738-m03 in network mk-ha-371738
	I0420 00:16:37.418847   94171 main.go:141] libmachine: (ha-371738-m03) DBG | I0420 00:16:37.418754   94971 retry.go:31] will retry after 5.620123819s: waiting for machine to come up
	I0420 00:16:43.043791   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.044254   94171 main.go:141] libmachine: (ha-371738-m03) Found IP for machine: 192.168.39.253
	I0420 00:16:43.044277   94171 main.go:141] libmachine: (ha-371738-m03) Reserving static IP address...
	I0420 00:16:43.044292   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has current primary IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.044837   94171 main.go:141] libmachine: (ha-371738-m03) DBG | unable to find host DHCP lease matching {name: "ha-371738-m03", mac: "52:54:00:cc:e5:aa", ip: "192.168.39.253"} in network mk-ha-371738
	I0420 00:16:43.117958   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Getting to WaitForSSH function...
	I0420 00:16:43.117991   94171 main.go:141] libmachine: (ha-371738-m03) Reserved static IP address: 192.168.39.253
	I0420 00:16:43.118006   94171 main.go:141] libmachine: (ha-371738-m03) Waiting for SSH to be available...
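
Note: the repeated "will retry after ..." lines above are the driver polling the mk-ha-371738 network's DHCP leases until the new VM obtains an address, with a growing delay between attempts. A small stand-alone sketch of that wait-and-back-off pattern (illustrative only; lookupIP and the 30s timeout are assumptions, not minikube's helpers):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP stands in for querying the libvirt network's DHCP leases by MAC.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet") // placeholder
    }

    // waitForIP retries with an increasing delay, like the retry.go lines above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // back off gradually
            }
        }
        return "", fmt.Errorf("no IP for %s within %v", mac, timeout)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:cc:e5:aa", 30*time.Second); err == nil {
            fmt.Println("found IP:", ip)
        } else {
            fmt.Println(err)
        }
    }
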
	I0420 00:16:43.120733   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.121205   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.121235   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.121411   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Using SSH client type: external
	I0420 00:16:43.121442   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa (-rw-------)
	I0420 00:16:43.121473   94171 main.go:141] libmachine: (ha-371738-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 00:16:43.121493   94171 main.go:141] libmachine: (ha-371738-m03) DBG | About to run SSH command:
	I0420 00:16:43.121514   94171 main.go:141] libmachine: (ha-371738-m03) DBG | exit 0
	I0420 00:16:43.250207   94171 main.go:141] libmachine: (ha-371738-m03) DBG | SSH cmd err, output: <nil>: 
	I0420 00:16:43.250498   94171 main.go:141] libmachine: (ha-371738-m03) KVM machine creation complete!
	I0420 00:16:43.250794   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetConfigRaw
	I0420 00:16:43.251384   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:43.251599   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:43.251771   94171 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 00:16:43.251790   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:16:43.253233   94171 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 00:16:43.253251   94171 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 00:16:43.253260   94171 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 00:16:43.253273   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.255679   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.256015   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.256049   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.256210   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:43.256409   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.256620   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.256760   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:43.256895   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:43.257137   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:43.257154   94171 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 00:16:43.356613   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:16:43.356633   94171 main.go:141] libmachine: Detecting the provisioner...
	I0420 00:16:43.356641   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.360590   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.361070   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.361104   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.361245   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:43.361479   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.361675   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.361828   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:43.362065   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:43.362272   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:43.362290   94171 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 00:16:43.463052   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 00:16:43.463118   94171 main.go:141] libmachine: found compatible host: buildroot
	I0420 00:16:43.463125   94171 main.go:141] libmachine: Provisioning with buildroot...
	I0420 00:16:43.463132   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetMachineName
	I0420 00:16:43.463434   94171 buildroot.go:166] provisioning hostname "ha-371738-m03"
	I0420 00:16:43.463458   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetMachineName
	I0420 00:16:43.463668   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.466501   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.466893   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.466924   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.467103   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:43.467289   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.467484   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.467645   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:43.467857   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:43.468061   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:43.468084   94171 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-371738-m03 && echo "ha-371738-m03" | sudo tee /etc/hostname
	I0420 00:16:43.591113   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738-m03
	
	I0420 00:16:43.591144   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.593966   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.594333   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.594366   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.594535   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:43.594715   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.594933   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:43.595134   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:43.595361   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:43.595521   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:43.595537   94171 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-371738-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-371738-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-371738-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:16:43.707667   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
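
Note: the hostname provisioning above runs two shell snippets over SSH, one setting and persisting the hostname and one patching /etc/hosts with a 127.0.1.1 entry. A minimal sketch of driving the same command through the system ssh binary, reusing the flags and key path already shown in the WaitForSSH lines (illustrative only, not the driver's buildroot provisioner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runSSH shells out to ssh with options similar to the log's WaitForSSH step.
    func runSSH(host, keyPath, command string) (string, error) {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@" + host,
            command,
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        host := "192.168.39.253" // the address the new node received above
        key := "/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa"
        // Same command as the provisioning step: set the hostname and persist it.
        cmd := `sudo hostname ha-371738-m03 && echo "ha-371738-m03" | sudo tee /etc/hostname`
        out, err := runSSH(host, key, cmd)
        fmt.Println(out)
        if err != nil {
            fmt.Println("ssh failed:", err)
        }
    }
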
	I0420 00:16:43.707701   94171 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:16:43.707721   94171 buildroot.go:174] setting up certificates
	I0420 00:16:43.707734   94171 provision.go:84] configureAuth start
	I0420 00:16:43.707747   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetMachineName
	I0420 00:16:43.708065   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:16:43.710910   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.711340   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.711369   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.711533   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:43.713969   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.714365   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:43.714391   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:43.714545   94171 provision.go:143] copyHostCerts
	I0420 00:16:43.714580   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:16:43.714629   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:16:43.714638   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:16:43.714715   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:16:43.714816   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:16:43.714841   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:16:43.714847   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:16:43.714885   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:16:43.714947   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:16:43.714970   94171 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:16:43.714980   94171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:16:43.715010   94171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:16:43.715078   94171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.ha-371738-m03 san=[127.0.0.1 192.168.39.253 ha-371738-m03 localhost minikube]
	I0420 00:16:44.053765   94171 provision.go:177] copyRemoteCerts
	I0420 00:16:44.053828   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:16:44.053856   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.056720   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.057090   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.057127   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.057288   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.057544   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.057702   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.057881   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:16:44.145602   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:16:44.145673   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:16:44.173227   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:16:44.173299   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0420 00:16:44.200252   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:16:44.200319   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
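
Note: configureAuth copies the CA certificate plus a freshly generated server cert/key onto the node (the three transfers above), so clients can authenticate against the machine later. A stand-alone sketch of one such transfer using the system scp binary (paths and host are taken from the log; the /tmp destination and the use of scp itself are assumptions, since the log's transfers go through minikube's own ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa"
        src := "/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem"
        // A real run would still need sudo on the node to move the file into /etc/docker.
        dst := "docker@192.168.39.253:/tmp/ca.pem"
        cmd := exec.Command("scp",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-i", key, src, dst)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Println("scp failed:", err, string(out))
        }
    }
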
	I0420 00:16:44.227652   94171 provision.go:87] duration metric: took 519.904112ms to configureAuth
	I0420 00:16:44.227683   94171 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:16:44.227875   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:16:44.227956   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.230922   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.231348   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.231376   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.231563   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.231771   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.231958   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.232122   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.232283   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:44.232458   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:44.232475   94171 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:16:44.505879   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:16:44.505914   94171 main.go:141] libmachine: Checking connection to Docker...
	I0420 00:16:44.505925   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetURL
	I0420 00:16:44.507319   94171 main.go:141] libmachine: (ha-371738-m03) DBG | Using libvirt version 6000000
	I0420 00:16:44.509562   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.509911   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.509940   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.510151   94171 main.go:141] libmachine: Docker is up and running!
	I0420 00:16:44.510168   94171 main.go:141] libmachine: Reticulating splines...
	I0420 00:16:44.510176   94171 client.go:171] duration metric: took 26.147070641s to LocalClient.Create
	I0420 00:16:44.510196   94171 start.go:167] duration metric: took 26.147135792s to libmachine.API.Create "ha-371738"
	I0420 00:16:44.510206   94171 start.go:293] postStartSetup for "ha-371738-m03" (driver="kvm2")
	I0420 00:16:44.510215   94171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:16:44.510242   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.510460   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:16:44.510486   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.512596   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.512922   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.512952   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.513048   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.513227   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.513394   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.513525   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:16:44.593717   94171 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:16:44.598653   94171 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:16:44.598679   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:16:44.598752   94171 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:16:44.598845   94171 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:16:44.598857   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:16:44.598955   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:16:44.610662   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:16:44.636626   94171 start.go:296] duration metric: took 126.40507ms for postStartSetup
	I0420 00:16:44.636684   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetConfigRaw
	I0420 00:16:44.637394   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:16:44.640744   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.641145   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.641167   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.641457   94171 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:16:44.641685   94171 start.go:128] duration metric: took 26.2990747s to createHost
	I0420 00:16:44.641717   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.644013   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.644493   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.644517   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.644655   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.644865   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.645038   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.645189   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.645379   94171 main.go:141] libmachine: Using SSH client type: native
	I0420 00:16:44.645532   94171 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0420 00:16:44.645543   94171 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:16:44.751198   94171 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713572204.724524131
	
	I0420 00:16:44.751221   94171 fix.go:216] guest clock: 1713572204.724524131
	I0420 00:16:44.751231   94171 fix.go:229] Guest: 2024-04-20 00:16:44.724524131 +0000 UTC Remote: 2024-04-20 00:16:44.641701819 +0000 UTC m=+154.453413482 (delta=82.822312ms)
	I0420 00:16:44.751253   94171 fix.go:200] guest clock delta is within tolerance: 82.822312ms
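
Note: the fix.go lines above compare the guest clock (read over SSH with the date command) against the host clock and accept the machine when the drift is small; here the delta was about 83ms. A tiny sketch of the same comparison using the two timestamps from the log (the 2s tolerance is an assumption, not minikube's actual threshold):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the guest clock / Remote lines above.
        guest := time.Unix(0, int64(1713572204.724524131*float64(time.Second))).UTC()
        host, _ := time.Parse(time.RFC3339Nano, "2024-04-20T00:16:44.641701819Z")

        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= 2*time.Second)
    }
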
	I0420 00:16:44.751260   94171 start.go:83] releasing machines lock for "ha-371738-m03", held for 26.408759008s
	I0420 00:16:44.751282   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.751568   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:16:44.753935   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.754331   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.754361   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.756720   94171 out.go:177] * Found network options:
	I0420 00:16:44.757982   94171 out.go:177]   - NO_PROXY=192.168.39.217,192.168.39.48
	W0420 00:16:44.759243   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	W0420 00:16:44.759265   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 00:16:44.759277   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.759767   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.759976   94171 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:16:44.760084   94171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:16:44.760141   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	W0420 00:16:44.760214   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	W0420 00:16:44.760242   94171 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 00:16:44.760342   94171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:16:44.760368   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:16:44.763019   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.763210   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.763428   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.763453   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.763597   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.763757   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:44.763763   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.763781   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:44.763965   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.763989   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:16:44.764165   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:16:44.764191   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:16:44.764304   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:16:44.764520   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:16:45.000306   94171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 00:16:45.008157   94171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:16:45.008266   94171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:16:45.027279   94171 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 00:16:45.027307   94171 start.go:494] detecting cgroup driver to use...
	I0420 00:16:45.027381   94171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:16:45.044536   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:16:45.059602   94171 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:16:45.059655   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:16:45.074069   94171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:16:45.088108   94171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:16:45.215697   94171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:16:45.378103   94171 docker.go:233] disabling docker service ...
	I0420 00:16:45.378185   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:16:45.395365   94171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:16:45.409169   94171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:16:45.557032   94171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:16:45.690417   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:16:45.707696   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:16:45.729031   94171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:16:45.729091   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.741885   94171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:16:45.741960   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.753900   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.765056   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.778946   94171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:16:45.792094   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.804565   94171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.824863   94171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:16:45.836644   94171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:16:45.847812   94171 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 00:16:45.847875   94171 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 00:16:45.863416   94171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:16:45.873866   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:16:46.009171   94171 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 00:16:46.159075   94171 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:16:46.159146   94171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:16:46.164804   94171 start.go:562] Will wait 60s for crictl version
	I0420 00:16:46.164851   94171 ssh_runner.go:195] Run: which crictl
	I0420 00:16:46.169388   94171 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:16:46.208706   94171 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 00:16:46.208793   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:16:46.242307   94171 ssh_runner.go:195] Run: crio --version
	I0420 00:16:46.274164   94171 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:16:46.275633   94171 out.go:177]   - env NO_PROXY=192.168.39.217
	I0420 00:16:46.276957   94171 out.go:177]   - env NO_PROXY=192.168.39.217,192.168.39.48
	I0420 00:16:46.278171   94171 main.go:141] libmachine: (ha-371738-m03) Calling .GetIP
	I0420 00:16:46.280769   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:46.281128   94171 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:16:46.281150   94171 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:16:46.281401   94171 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:16:46.285943   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:16:46.300171   94171 mustload.go:65] Loading cluster: ha-371738
	I0420 00:16:46.300379   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:16:46.300630   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:16:46.300668   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:16:46.315016   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I0420 00:16:46.315401   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:16:46.315881   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:16:46.315907   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:16:46.316223   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:16:46.316431   94171 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:16:46.318071   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:16:46.318335   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:16:46.318367   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:16:46.332056   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0420 00:16:46.332425   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:16:46.332843   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:16:46.332864   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:16:46.333148   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:16:46.333356   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:16:46.333529   94171 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738 for IP: 192.168.39.253
	I0420 00:16:46.333548   94171 certs.go:194] generating shared ca certs ...
	I0420 00:16:46.333566   94171 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:16:46.333716   94171 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:16:46.333763   94171 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:16:46.333777   94171 certs.go:256] generating profile certs ...
	I0420 00:16:46.333870   94171 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key
	I0420 00:16:46.333902   94171 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.441f7660
	I0420 00:16:46.333921   94171 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.441f7660 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.48 192.168.39.253 192.168.39.254]
	I0420 00:16:46.571466   94171 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.441f7660 ...
	I0420 00:16:46.571502   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.441f7660: {Name:mk5163288e441a9f3612764637090483eba4cfc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:16:46.571738   94171 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.441f7660 ...
	I0420 00:16:46.571765   94171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.441f7660: {Name:mk7b6be0777ba3300f48a9e2cc1b97a759a2b430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:16:46.571878   94171 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.441f7660 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt
	I0420 00:16:46.572024   94171 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.441f7660 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key
	I0420 00:16:46.572171   94171 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key
	I0420 00:16:46.572190   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:16:46.572204   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:16:46.572219   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:16:46.572235   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:16:46.572254   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:16:46.572271   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:16:46.572286   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:16:46.572299   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:16:46.572347   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:16:46.572377   94171 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:16:46.572388   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:16:46.572410   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:16:46.572441   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:16:46.572462   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:16:46.572519   94171 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:16:46.572567   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:16:46.572595   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:16:46.572616   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:16:46.572666   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:16:46.575877   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:16:46.576324   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:16:46.576350   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:16:46.576531   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:16:46.576716   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:16:46.576910   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:16:46.577055   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:16:46.657596   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0420 00:16:46.664152   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0420 00:16:46.681082   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0420 00:16:46.687658   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0420 00:16:46.700993   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0420 00:16:46.706697   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0420 00:16:46.719046   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0420 00:16:46.724454   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0420 00:16:46.738960   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0420 00:16:46.744072   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0420 00:16:46.758038   94171 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0420 00:16:46.763486   94171 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0420 00:16:46.776597   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:16:46.806861   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:16:46.834440   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:16:46.860808   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:16:46.886262   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0420 00:16:46.912856   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:16:46.938598   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:16:46.963855   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 00:16:46.991253   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:16:47.018911   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:16:47.045377   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:16:47.075192   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0420 00:16:47.094910   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0420 00:16:47.114705   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0420 00:16:47.134134   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0420 00:16:47.154366   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0420 00:16:47.174174   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0420 00:16:47.193905   94171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0420 00:16:47.212202   94171 ssh_runner.go:195] Run: openssl version
	I0420 00:16:47.218244   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:16:47.230302   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:16:47.234877   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:16:47.234916   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:16:47.240935   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 00:16:47.253253   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:16:47.265076   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:16:47.269789   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:16:47.269828   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:16:47.275781   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 00:16:47.288687   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:16:47.301056   94171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:16:47.306160   94171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:16:47.306218   94171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:16:47.312169   94171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 00:16:47.324165   94171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:16:47.328487   94171 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 00:16:47.328544   94171 kubeadm.go:928] updating node {m03 192.168.39.253 8443 v1.30.0 crio true true} ...
	I0420 00:16:47.328643   94171 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-371738-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 00:16:47.328671   94171 kube-vip.go:111] generating kube-vip config ...
	I0420 00:16:47.328705   94171 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0420 00:16:47.348476   94171 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 00:16:47.348546   94171 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0420 00:16:47.348613   94171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:16:47.359896   94171 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0420 00:16:47.359953   94171 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0420 00:16:47.370514   94171 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0420 00:16:47.370541   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0420 00:16:47.370601   94171 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0420 00:16:47.370650   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:16:47.370601   94171 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0420 00:16:47.370725   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0420 00:16:47.370767   94171 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0420 00:16:47.370606   94171 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0420 00:16:47.387017   94171 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0420 00:16:47.387035   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0420 00:16:47.387053   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0420 00:16:47.387084   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0420 00:16:47.387113   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0420 00:16:47.387094   94171 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0420 00:16:47.424078   94171 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0420 00:16:47.424121   94171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0420 00:16:48.380427   94171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0420 00:16:48.390505   94171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0420 00:16:48.409744   94171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:16:48.428990   94171 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0420 00:16:48.448874   94171 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0420 00:16:48.453592   94171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:16:48.466689   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:16:48.593429   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:16:48.613993   94171 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:16:48.614349   94171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:16:48.614395   94171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:16:48.630644   94171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0420 00:16:48.631092   94171 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:16:48.631551   94171 main.go:141] libmachine: Using API Version  1
	I0420 00:16:48.631574   94171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:16:48.632004   94171 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:16:48.632250   94171 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:16:48.632443   94171 start.go:316] joinCluster: &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:16:48.632592   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0420 00:16:48.632627   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:16:48.635807   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:16:48.636274   94171 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:16:48.636311   94171 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:16:48.636490   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:16:48.636674   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:16:48.636846   94171 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:16:48.636988   94171 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:16:48.812312   94171 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:16:48.812382   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pf094g.f6vyxymxfplz8bcz --discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-371738-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443"
	I0420 00:17:14.043770   94171 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pf094g.f6vyxymxfplz8bcz --discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-371738-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443": (25.231352321s)
	I0420 00:17:14.043833   94171 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0420 00:17:14.522012   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-371738-m03 minikube.k8s.io/updated_at=2024_04_20T00_17_14_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-371738 minikube.k8s.io/primary=false
	I0420 00:17:14.653111   94171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-371738-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0420 00:17:14.795721   94171 start.go:318] duration metric: took 26.163270633s to joinCluster
	I0420 00:17:14.795813   94171 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:17:14.797711   94171 out.go:177] * Verifying Kubernetes components...
	I0420 00:17:14.796145   94171 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:17:14.799494   94171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:17:15.074891   94171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:17:15.148253   94171 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:17:15.148627   94171 kapi.go:59] client config for ha-371738: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.crt", KeyFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key", CAFile:"/home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0420 00:17:15.148716   94171 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0420 00:17:15.149022   94171 node_ready.go:35] waiting up to 6m0s for node "ha-371738-m03" to be "Ready" ...
	I0420 00:17:15.149116   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:15.149127   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:15.149135   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:15.149144   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:15.153213   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:15.649865   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:15.649887   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:15.649895   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:15.649900   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:15.653392   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:16.150122   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:16.150151   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:16.150163   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:16.150169   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:16.154626   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:16.649608   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:16.649639   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:16.649650   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:16.649655   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:16.654169   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:17.149437   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:17.149467   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:17.149478   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:17.149485   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:17.152912   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:17.153692   94171 node_ready.go:53] node "ha-371738-m03" has status "Ready":"False"
	I0420 00:17:17.650093   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:17.650120   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:17.650131   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:17.650138   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:17.653930   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:18.149848   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:18.149875   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:18.149886   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:18.149892   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:18.153841   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:18.649637   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:18.649665   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:18.649675   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:18.649679   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:18.653420   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.149337   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.149360   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.149368   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.149373   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.153180   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.154190   94171 node_ready.go:49] node "ha-371738-m03" has status "Ready":"True"
	I0420 00:17:19.154213   94171 node_ready.go:38] duration metric: took 4.005171084s for node "ha-371738-m03" to be "Ready" ...
	I0420 00:17:19.154225   94171 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:17:19.154295   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:19.154309   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.154320   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.154328   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.161220   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:17:19.170221   94171 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.170306   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9hc82
	I0420 00:17:19.170318   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.170325   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.170329   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.174058   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.174629   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:19.174647   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.174656   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.174661   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.177880   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.178568   94171 pod_ready.go:92] pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:19.178599   94171 pod_ready.go:81] duration metric: took 8.345138ms for pod "coredns-7db6d8ff4d-9hc82" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.178616   94171 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.178699   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jvvpr
	I0420 00:17:19.178710   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.178720   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.178727   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.181883   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.182845   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:19.182867   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.182878   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.182884   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.185713   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:19.186730   94171 pod_ready.go:92] pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:19.186745   94171 pod_ready.go:81] duration metric: took 8.121891ms for pod "coredns-7db6d8ff4d-jvvpr" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.186758   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.186810   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738
	I0420 00:17:19.186819   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.186826   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.186832   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.191012   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:19.193243   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:19.193259   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.193266   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.193270   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.195909   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:19.196534   94171 pod_ready.go:92] pod "etcd-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:19.196551   94171 pod_ready.go:81] duration metric: took 9.786532ms for pod "etcd-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.196561   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.196627   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m02
	I0420 00:17:19.196637   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.196647   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.196654   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.199704   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.200922   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:19.200947   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.200958   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.200964   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.206449   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:17:19.207537   94171 pod_ready.go:92] pod "etcd-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:19.207556   94171 pod_ready.go:81] duration metric: took 10.986108ms for pod "etcd-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.207567   94171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:19.349944   94171 request.go:629] Waited for 142.27904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:19.350026   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:19.350034   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.350045   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.350052   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.353760   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:19.549955   94171 request.go:629] Waited for 195.385232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.550011   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.550016   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.550024   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.550031   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.553053   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
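Note on the "Waited for ... due to client-side throttling, not priority and fairness" lines above: client-go prints these when its client-side rate limiter briefly delays a request; the waits in this run are a fraction of a second. A minimal sketch of widening that limiter on a rest.Config follows; the kubeconfig path and the QPS/Burst values are illustrative assumptions, not the settings minikube actually uses.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// Widen the client-side limiter so rapid polling is not delayed.
	// With QPS/Burst left at zero, client-go falls back to modest defaults
	// and logs the "client-side throttling" waits seen in this report.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d kube-system pods\n", len(pods.Items))
}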
	I0420 00:17:19.750003   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:19.750030   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.750042   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.750047   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.754300   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:19.950124   94171 request.go:629] Waited for 194.356929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.950198   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:19.950205   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:19.950215   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:19.950222   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:19.954126   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:20.207997   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:20.208019   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:20.208027   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:20.208032   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:20.211245   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:20.350148   94171 request.go:629] Waited for 138.082811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:20.350235   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:20.350247   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:20.350255   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:20.350262   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:20.354049   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:20.708688   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:20.708713   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:20.708721   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:20.708727   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:20.715110   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:17:20.750373   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:20.750397   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:20.750410   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:20.750416   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:20.753967   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:21.208038   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:21.208068   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:21.208105   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:21.208135   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:21.211721   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:21.212964   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:21.212979   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:21.212983   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:21.212986   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:21.215881   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:21.216521   94171 pod_ready.go:102] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"False"
	I0420 00:17:21.708713   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:21.708733   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:21.708740   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:21.708744   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:21.712073   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:21.713441   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:21.713464   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:21.713472   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:21.713480   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:21.719392   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:17:22.208172   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:22.208193   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:22.208201   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:22.208204   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:22.211830   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:22.212735   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:22.212753   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:22.212762   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:22.212766   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:22.215999   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:22.708600   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:22.708634   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:22.708652   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:22.708659   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:22.713963   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:17:22.715790   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:22.715811   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:22.715821   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:22.715825   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:22.721370   94171 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 00:17:23.207970   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:23.207991   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:23.207999   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:23.208004   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:23.211671   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:23.212938   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:23.212954   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:23.212962   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:23.212965   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:23.216279   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:23.217044   94171 pod_ready.go:102] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"False"
	I0420 00:17:23.708098   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:23.708121   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:23.708129   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:23.708134   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:23.711492   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:23.712329   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:23.712349   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:23.712356   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:23.712361   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:23.715089   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:24.208450   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:24.208474   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:24.208482   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:24.208486   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:24.211820   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:24.212843   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:24.212864   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:24.212875   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:24.212883   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:24.216198   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:24.708417   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:24.708439   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:24.708446   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:24.708451   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:24.712001   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:24.712698   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:24.712716   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:24.712723   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:24.712729   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:24.716272   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:25.208150   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:25.208173   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:25.208181   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:25.208185   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:25.211935   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:25.213021   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:25.213038   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:25.213045   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:25.213049   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:25.216298   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:25.708629   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:25.708656   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:25.708665   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:25.708670   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:25.712736   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:25.713787   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:25.713805   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:25.713814   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:25.713821   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:25.717454   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:25.718249   94171 pod_ready.go:102] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"False"
	I0420 00:17:26.208667   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:26.208695   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:26.208709   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:26.208716   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:26.212163   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:26.213042   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:26.213057   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:26.213064   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:26.213071   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:26.216194   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:26.707968   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:26.707991   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:26.707999   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:26.708004   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:26.711795   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:26.712642   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:26.712657   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:26.712664   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:26.712669   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:26.715623   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:27.208208   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:27.208231   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:27.208240   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:27.208245   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:27.211874   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:27.212747   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:27.212762   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:27.212768   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:27.212773   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:27.215496   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:27.707826   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:27.707847   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:27.707854   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:27.707858   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:27.711608   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:27.712623   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:27.712652   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:27.712660   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:27.712664   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:27.715600   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:28.208585   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:28.208606   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:28.208613   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:28.208617   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:28.212157   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:28.213175   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:28.213190   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:28.213197   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:28.213212   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:28.216271   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:28.216968   94171 pod_ready.go:102] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"False"
	I0420 00:17:28.708114   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:28.708143   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:28.708152   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:28.708156   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:28.711583   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:28.712737   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:28.712753   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:28.712762   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:28.712766   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:28.715715   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.208753   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:29.208789   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.208798   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.208803   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.212154   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.213356   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:29.213372   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.213379   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.213383   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.216508   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.707841   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-371738-m03
	I0420 00:17:29.707868   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.707879   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.707886   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.711675   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.712601   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:29.712620   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.712629   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.712635   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.716002   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.716789   94171 pod_ready.go:92] pod "etcd-ha-371738-m03" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.716808   94171 pod_ready.go:81] duration metric: took 10.509234817s for pod "etcd-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.716830   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.716895   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738
	I0420 00:17:29.716905   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.716915   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.716920   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.719594   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.720495   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:29.720511   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.720517   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.720521   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.723175   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.723841   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.723864   94171 pod_ready.go:81] duration metric: took 7.024745ms for pod "kube-apiserver-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.723876   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.723940   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m02
	I0420 00:17:29.723952   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.723960   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.723967   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.726704   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.727342   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:29.727362   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.727373   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.727378   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.729785   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.730332   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.730352   94171 pod_ready.go:81] duration metric: took 6.468527ms for pod "kube-apiserver-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.730362   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.730425   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-371738-m03
	I0420 00:17:29.730436   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.730446   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.730451   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.733047   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.733781   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:29.733801   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.733811   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.733818   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.736687   94171 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 00:17:29.737846   94171 pod_ready.go:92] pod "kube-apiserver-ha-371738-m03" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.737867   94171 pod_ready.go:81] duration metric: took 7.496633ms for pod "kube-apiserver-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.737879   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.737936   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738
	I0420 00:17:29.737947   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.737957   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.737964   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.741179   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:29.741855   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:29.741873   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.741884   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.741893   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.749571   94171 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 00:17:29.750097   94171 pod_ready.go:92] pod "kube-controller-manager-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:29.750121   94171 pod_ready.go:81] duration metric: took 12.234318ms for pod "kube-controller-manager-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.750133   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:29.908463   94171 request.go:629] Waited for 158.24934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738-m02
	I0420 00:17:29.908528   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738-m02
	I0420 00:17:29.908533   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:29.908541   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:29.908545   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:29.912055   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:30.108341   94171 request.go:629] Waited for 195.364227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:30.108405   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:30.108411   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.108422   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.108437   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.112635   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:30.113203   94171 pod_ready.go:92] pod "kube-controller-manager-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:30.113221   94171 pod_ready.go:81] duration metric: took 363.080361ms for pod "kube-controller-manager-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.113231   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.308591   94171 request.go:629] Waited for 195.287776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738-m03
	I0420 00:17:30.308657   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-371738-m03
	I0420 00:17:30.308662   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.308671   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.308678   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.312580   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:30.508194   94171 request.go:629] Waited for 194.465635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:30.508271   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:30.508282   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.508293   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.508306   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.511919   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:30.512809   94171 pod_ready.go:92] pod "kube-controller-manager-ha-371738-m03" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:30.512828   94171 pod_ready.go:81] duration metric: took 399.588508ms for pod "kube-controller-manager-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.512838   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59wls" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.707874   94171 request.go:629] Waited for 194.956694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59wls
	I0420 00:17:30.707942   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59wls
	I0420 00:17:30.707948   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.707957   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.707963   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.712104   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:30.908591   94171 request.go:629] Waited for 195.384985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:30.908692   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:30.908706   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:30.908719   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:30.908725   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:30.913248   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:30.914095   94171 pod_ready.go:92] pod "kube-proxy-59wls" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:30.914119   94171 pod_ready.go:81] duration metric: took 401.273767ms for pod "kube-proxy-59wls" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:30.914133   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-924z9" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.108117   94171 request.go:629] Waited for 193.908699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-924z9
	I0420 00:17:31.108188   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-924z9
	I0420 00:17:31.108194   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.108202   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.108206   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.112357   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:31.308918   94171 request.go:629] Waited for 195.365354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:31.308977   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:31.308991   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.309002   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.309010   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.312910   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:31.313955   94171 pod_ready.go:92] pod "kube-proxy-924z9" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:31.313973   94171 pod_ready.go:81] duration metric: took 399.833418ms for pod "kube-proxy-924z9" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.313982   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zw62l" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.508828   94171 request.go:629] Waited for 194.78105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw62l
	I0420 00:17:31.508938   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw62l
	I0420 00:17:31.508956   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.508965   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.508969   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.512828   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:31.708208   94171 request.go:629] Waited for 194.380563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:31.708298   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:31.708306   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.708320   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.708331   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.711388   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:31.712271   94171 pod_ready.go:92] pod "kube-proxy-zw62l" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:31.712288   94171 pod_ready.go:81] duration metric: took 398.299702ms for pod "kube-proxy-zw62l" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.712298   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:31.908385   94171 request.go:629] Waited for 196.005489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738
	I0420 00:17:31.908457   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738
	I0420 00:17:31.908464   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:31.908472   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:31.908480   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:31.912558   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:32.108639   94171 request.go:629] Waited for 195.372084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:32.108739   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738
	I0420 00:17:32.108753   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.108761   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.108767   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.113520   94171 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 00:17:32.114965   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:32.114989   94171 pod_ready.go:81] duration metric: took 402.683186ms for pod "kube-scheduler-ha-371738" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.115002   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.308133   94171 request.go:629] Waited for 193.010716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m02
	I0420 00:17:32.308189   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m02
	I0420 00:17:32.308194   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.308204   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.308215   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.311763   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:32.508893   94171 request.go:629] Waited for 196.361088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:32.508966   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m02
	I0420 00:17:32.508977   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.508986   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.508992   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.512382   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:32.512980   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:32.512998   94171 pod_ready.go:81] duration metric: took 397.989059ms for pod "kube-scheduler-ha-371738-m02" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.513007   94171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.708181   94171 request.go:629] Waited for 195.082136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m03
	I0420 00:17:32.708242   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738-m03
	I0420 00:17:32.708247   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.708254   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.708259   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.712019   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:32.908239   94171 request.go:629] Waited for 195.354874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:32.908328   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-371738-m03
	I0420 00:17:32.908341   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.908351   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.908359   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.911774   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:32.912491   94171 pod_ready.go:92] pod "kube-scheduler-ha-371738-m03" in "kube-system" namespace has status "Ready":"True"
	I0420 00:17:32.912513   94171 pod_ready.go:81] duration metric: took 399.498356ms for pod "kube-scheduler-ha-371738-m03" in "kube-system" namespace to be "Ready" ...
	I0420 00:17:32.912528   94171 pod_ready.go:38] duration metric: took 13.758290828s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
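The preceding block is minikube's pod_ready wait: each system pod (and the node it runs on) is polled roughly every 500ms until the pod reports a Ready condition of True; etcd-ha-371738-m03 took about 10.5s here, and the whole phase about 13.8s. A minimal sketch of the same polling pattern with client-go is below; the kubeconfig path, poll interval, and timeout are illustrative assumptions rather than values taken from minikube's source.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-ha-371738-m03",
		500*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}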
	I0420 00:17:32.912549   94171 api_server.go:52] waiting for apiserver process to appear ...
	I0420 00:17:32.912615   94171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:17:32.931149   94171 api_server.go:72] duration metric: took 18.135251217s to wait for apiserver process to appear ...
	I0420 00:17:32.931170   94171 api_server.go:88] waiting for apiserver healthz status ...
	I0420 00:17:32.931190   94171 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0420 00:17:32.937852   94171 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0420 00:17:32.937924   94171 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0420 00:17:32.937937   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:32.937945   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:32.937949   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:32.938889   94171 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0420 00:17:32.938961   94171 api_server.go:141] control plane version: v1.30.0
	I0420 00:17:32.938980   94171 api_server.go:131] duration metric: took 7.802392ms to wait for apiserver health ...
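The healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy. A minimal stand-alone sketch follows; it skips TLS verification purely for brevity (an assumption for illustration only, since the real client authenticates with the CA and client certificates from the kubeconfig).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above; adjust to your apiserver address.
	const url = "https://192.168.39.217:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut only; verify against the cluster CA in practice.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}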
	I0420 00:17:32.938994   94171 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 00:17:33.108421   94171 request.go:629] Waited for 169.340457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:33.108480   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:33.108485   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:33.108493   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:33.108498   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:33.115314   94171 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 00:17:33.121462   94171 system_pods.go:59] 24 kube-system pods found
	I0420 00:17:33.121487   94171 system_pods.go:61] "coredns-7db6d8ff4d-9hc82" [279d40d8-eb21-476c-ba36-bc7592777126] Running
	I0420 00:17:33.121491   94171 system_pods.go:61] "coredns-7db6d8ff4d-jvvpr" [104d5328-1f6a-4747-8e26-9a98e38dc1cc] Running
	I0420 00:17:33.121494   94171 system_pods.go:61] "etcd-ha-371738" [5e23c4a0-7c15-47b9-b722-82e61a10f286] Running
	I0420 00:17:33.121498   94171 system_pods.go:61] "etcd-ha-371738-m02" [712e8a6e-7007-4cf1-8a0c-4e33eeccebcd] Running
	I0420 00:17:33.121500   94171 system_pods.go:61] "etcd-ha-371738-m03" [089407cc-414e-479f-8522-f19327068d36] Running
	I0420 00:17:33.121505   94171 system_pods.go:61] "kindnet-ggw7f" [2e0d1c1a-6fb4-4c3e-ae2b-41cfccaba2dd] Running
	I0420 00:17:33.121508   94171 system_pods.go:61] "kindnet-ph4sb" [d0786a22-e08e-4924-93b1-d8f3f34c9da7] Running
	I0420 00:17:33.121510   94171 system_pods.go:61] "kindnet-s87k2" [0820561f-f794-4ac5-8ce2-ae0cb4310c3e] Running
	I0420 00:17:33.121514   94171 system_pods.go:61] "kube-apiserver-ha-371738" [301ce02b-37b1-42ba-8a45-fbde327e2a02] Running
	I0420 00:17:33.121517   94171 system_pods.go:61] "kube-apiserver-ha-371738-m02" [a22f017a-e7b0-4748-9486-b52d35284584] Running
	I0420 00:17:33.121520   94171 system_pods.go:61] "kube-apiserver-ha-371738-m03" [5a627f3c-199a-4a3f-9940-2e7e1d73321d] Running
	I0420 00:17:33.121524   94171 system_pods.go:61] "kube-controller-manager-ha-371738" [bc03ed79-b024-46b1-af13-45a3def8bcae] Running
	I0420 00:17:33.121527   94171 system_pods.go:61] "kube-controller-manager-ha-371738-m02" [7b460bfb-bddf-46c0-a30c-f5e9757a32ad] Running
	I0420 00:17:33.121531   94171 system_pods.go:61] "kube-controller-manager-ha-371738-m03" [2f7bc375-ad5a-4ff1-93b4-3166d4b92c35] Running
	I0420 00:17:33.121535   94171 system_pods.go:61] "kube-proxy-59wls" [722c6b7d-109b-4201-a5f1-c02a65befcf2] Running
	I0420 00:17:33.121538   94171 system_pods.go:61] "kube-proxy-924z9" [87034485-00d8-4a57-949d-2e894dd08ce4] Running
	I0420 00:17:33.121541   94171 system_pods.go:61] "kube-proxy-zw62l" [dad72bfc-65c2-4007-9d5c-682ddf48c44d] Running
	I0420 00:17:33.121544   94171 system_pods.go:61] "kube-scheduler-ha-371738" [a3df56d3-c437-4ea9-b73d-2b22e93334b3] Running
	I0420 00:17:33.121547   94171 system_pods.go:61] "kube-scheduler-ha-371738-m02" [47dba6e4-cb4d-43e8-a173-06d13b08fd55] Running
	I0420 00:17:33.121553   94171 system_pods.go:61] "kube-scheduler-ha-371738-m03" [35e43bbb-1e3f-44cf-846b-3b1bcd08a468] Running
	I0420 00:17:33.121558   94171 system_pods.go:61] "kube-vip-ha-371738" [8d162382-25bb-4393-8c45-a8487b571605] Running
	I0420 00:17:33.121564   94171 system_pods.go:61] "kube-vip-ha-371738-m02" [76331738-5bca-4724-939e-4c16a906e65b] Running
	I0420 00:17:33.121572   94171 system_pods.go:61] "kube-vip-ha-371738-m03" [c09364c4-d879-49fb-a719-e9c06301a4bc] Running
	I0420 00:17:33.121577   94171 system_pods.go:61] "storage-provisioner" [1d7b89d3-7cff-4258-8215-819971fa1b81] Running
	I0420 00:17:33.121585   94171 system_pods.go:74] duration metric: took 182.580911ms to wait for pod list to return data ...
	I0420 00:17:33.121595   94171 default_sa.go:34] waiting for default service account to be created ...
	I0420 00:17:33.308803   94171 request.go:629] Waited for 187.118905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0420 00:17:33.308874   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0420 00:17:33.308880   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:33.308888   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:33.308892   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:33.312432   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:33.312690   94171 default_sa.go:45] found service account: "default"
	I0420 00:17:33.312710   94171 default_sa.go:55] duration metric: took 191.106105ms for default service account to be created ...
	I0420 00:17:33.312717   94171 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 00:17:33.508470   94171 request.go:629] Waited for 195.677884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:33.508532   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0420 00:17:33.508537   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:33.508545   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:33.508555   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:33.518190   94171 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 00:17:33.524807   94171 system_pods.go:86] 24 kube-system pods found
	I0420 00:17:33.524833   94171 system_pods.go:89] "coredns-7db6d8ff4d-9hc82" [279d40d8-eb21-476c-ba36-bc7592777126] Running
	I0420 00:17:33.524840   94171 system_pods.go:89] "coredns-7db6d8ff4d-jvvpr" [104d5328-1f6a-4747-8e26-9a98e38dc1cc] Running
	I0420 00:17:33.524845   94171 system_pods.go:89] "etcd-ha-371738" [5e23c4a0-7c15-47b9-b722-82e61a10f286] Running
	I0420 00:17:33.524849   94171 system_pods.go:89] "etcd-ha-371738-m02" [712e8a6e-7007-4cf1-8a0c-4e33eeccebcd] Running
	I0420 00:17:33.524853   94171 system_pods.go:89] "etcd-ha-371738-m03" [089407cc-414e-479f-8522-f19327068d36] Running
	I0420 00:17:33.524856   94171 system_pods.go:89] "kindnet-ggw7f" [2e0d1c1a-6fb4-4c3e-ae2b-41cfccaba2dd] Running
	I0420 00:17:33.524861   94171 system_pods.go:89] "kindnet-ph4sb" [d0786a22-e08e-4924-93b1-d8f3f34c9da7] Running
	I0420 00:17:33.524864   94171 system_pods.go:89] "kindnet-s87k2" [0820561f-f794-4ac5-8ce2-ae0cb4310c3e] Running
	I0420 00:17:33.524869   94171 system_pods.go:89] "kube-apiserver-ha-371738" [301ce02b-37b1-42ba-8a45-fbde327e2a02] Running
	I0420 00:17:33.524873   94171 system_pods.go:89] "kube-apiserver-ha-371738-m02" [a22f017a-e7b0-4748-9486-b52d35284584] Running
	I0420 00:17:33.524878   94171 system_pods.go:89] "kube-apiserver-ha-371738-m03" [5a627f3c-199a-4a3f-9940-2e7e1d73321d] Running
	I0420 00:17:33.524885   94171 system_pods.go:89] "kube-controller-manager-ha-371738" [bc03ed79-b024-46b1-af13-45a3def8bcae] Running
	I0420 00:17:33.524890   94171 system_pods.go:89] "kube-controller-manager-ha-371738-m02" [7b460bfb-bddf-46c0-a30c-f5e9757a32ad] Running
	I0420 00:17:33.524896   94171 system_pods.go:89] "kube-controller-manager-ha-371738-m03" [2f7bc375-ad5a-4ff1-93b4-3166d4b92c35] Running
	I0420 00:17:33.524901   94171 system_pods.go:89] "kube-proxy-59wls" [722c6b7d-109b-4201-a5f1-c02a65befcf2] Running
	I0420 00:17:33.524910   94171 system_pods.go:89] "kube-proxy-924z9" [87034485-00d8-4a57-949d-2e894dd08ce4] Running
	I0420 00:17:33.524917   94171 system_pods.go:89] "kube-proxy-zw62l" [dad72bfc-65c2-4007-9d5c-682ddf48c44d] Running
	I0420 00:17:33.524921   94171 system_pods.go:89] "kube-scheduler-ha-371738" [a3df56d3-c437-4ea9-b73d-2b22e93334b3] Running
	I0420 00:17:33.524927   94171 system_pods.go:89] "kube-scheduler-ha-371738-m02" [47dba6e4-cb4d-43e8-a173-06d13b08fd55] Running
	I0420 00:17:33.524932   94171 system_pods.go:89] "kube-scheduler-ha-371738-m03" [35e43bbb-1e3f-44cf-846b-3b1bcd08a468] Running
	I0420 00:17:33.524938   94171 system_pods.go:89] "kube-vip-ha-371738" [8d162382-25bb-4393-8c45-a8487b571605] Running
	I0420 00:17:33.524942   94171 system_pods.go:89] "kube-vip-ha-371738-m02" [76331738-5bca-4724-939e-4c16a906e65b] Running
	I0420 00:17:33.524948   94171 system_pods.go:89] "kube-vip-ha-371738-m03" [c09364c4-d879-49fb-a719-e9c06301a4bc] Running
	I0420 00:17:33.524951   94171 system_pods.go:89] "storage-provisioner" [1d7b89d3-7cff-4258-8215-819971fa1b81] Running
	I0420 00:17:33.524961   94171 system_pods.go:126] duration metric: took 212.238163ms to wait for k8s-apps to be running ...
	I0420 00:17:33.524969   94171 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 00:17:33.525015   94171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:17:33.544738   94171 system_svc.go:56] duration metric: took 19.760068ms WaitForService to wait for kubelet
	I0420 00:17:33.544768   94171 kubeadm.go:576] duration metric: took 18.748916318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:17:33.544790   94171 node_conditions.go:102] verifying NodePressure condition ...
	I0420 00:17:33.708442   94171 request.go:629] Waited for 163.564735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0420 00:17:33.708536   94171 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0420 00:17:33.708549   94171 round_trippers.go:469] Request Headers:
	I0420 00:17:33.708559   94171 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0420 00:17:33.708565   94171 round_trippers.go:473]     Accept: application/json, */*
	I0420 00:17:33.712292   94171 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 00:17:33.713845   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:17:33.713869   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:17:33.713881   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:17:33.713884   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:17:33.713887   94171 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 00:17:33.713891   94171 node_conditions.go:123] node cpu capacity is 2
	I0420 00:17:33.713895   94171 node_conditions.go:105] duration metric: took 169.098844ms to run NodePressure ...
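The NodePressure step above reads each node's status and reports its ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs for each of the three nodes here). A short sketch of listing nodes and printing those capacities with client-go follows; the kubeconfig path is again an assumed placeholder.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}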
	I0420 00:17:33.713907   94171 start.go:240] waiting for startup goroutines ...
	I0420 00:17:33.713931   94171 start.go:254] writing updated cluster config ...
	I0420 00:17:33.714201   94171 ssh_runner.go:195] Run: rm -f paused
	I0420 00:17:33.766160   94171 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 00:17:33.768271   94171 out.go:177] * Done! kubectl is now configured to use "ha-371738" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.354877700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572524354856598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0c722c0-86b3-41aa-a245-3671d661427e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.355418730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f28eeb4f-940d-4b1e-acf4-195dffef5b55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.355468499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f28eeb4f-940d-4b1e-acf4-195dffef5b55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.355720786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572256398169441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112401733564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112310336552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f5c9dcace63e2dc86b034dc66a0a660764b45a0999a972dea4c7c8cd62d11e,PodSandboxId:01cb2806eed5909650fa3a5bbb88b004584ddd9d24eee13df6af3949638dac25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713572110734518387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b,PodSandboxId:e0cd9f38c95d64e8716ff5be77be15b480d34445a3a35c3e35d5cc2bb3e044a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135721
08941915488,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572108700737032,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245ebfdbdadb6145c9104ae5b268ed54335723a4402d44a9f283dca41c61dbf2,PodSandboxId:e3275bdf3889ebe8780f0f686229b8a81cda9dd7ac84f9a1b3e19cf39eab89b1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572088929801835,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78293c6a4434108e95d95ceaf01fb5d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572087052250691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb,PodSandboxId:955da581ce36468153b6418af5b2fbdf608b8744b4c56479853fdcd91e690225,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572087016669323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572086953744333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370,PodSandboxId:19081241153ea0333e203fc33b13da47c76fa5bce9ccea62ac30f45b1c588e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572086899239600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f28eeb4f-940d-4b1e-acf4-195dffef5b55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.397698458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0deb6e09-5842-4ef9-9639-73504e57164b name=/runtime.v1.RuntimeService/Version
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.397764385Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0deb6e09-5842-4ef9-9639-73504e57164b name=/runtime.v1.RuntimeService/Version
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.398772620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a354a0c5-c9f3-4d2d-b710-e87234b2f40b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.399301723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572524399278038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a354a0c5-c9f3-4d2d-b710-e87234b2f40b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.399905364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fa74716-02fd-4558-8dc6-5073bc0abea8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.399994446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fa74716-02fd-4558-8dc6-5073bc0abea8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.400320614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572256398169441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112401733564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112310336552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f5c9dcace63e2dc86b034dc66a0a660764b45a0999a972dea4c7c8cd62d11e,PodSandboxId:01cb2806eed5909650fa3a5bbb88b004584ddd9d24eee13df6af3949638dac25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713572110734518387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b,PodSandboxId:e0cd9f38c95d64e8716ff5be77be15b480d34445a3a35c3e35d5cc2bb3e044a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135721
08941915488,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572108700737032,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245ebfdbdadb6145c9104ae5b268ed54335723a4402d44a9f283dca41c61dbf2,PodSandboxId:e3275bdf3889ebe8780f0f686229b8a81cda9dd7ac84f9a1b3e19cf39eab89b1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572088929801835,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78293c6a4434108e95d95ceaf01fb5d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572087052250691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb,PodSandboxId:955da581ce36468153b6418af5b2fbdf608b8744b4c56479853fdcd91e690225,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572087016669323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572086953744333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370,PodSandboxId:19081241153ea0333e203fc33b13da47c76fa5bce9ccea62ac30f45b1c588e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572086899239600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fa74716-02fd-4558-8dc6-5073bc0abea8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.440235631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4313bf8-5d84-4ed2-8d82-22043125735f name=/runtime.v1.RuntimeService/Version
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.440362652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4313bf8-5d84-4ed2-8d82-22043125735f name=/runtime.v1.RuntimeService/Version
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.441368555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23f614d0-6d54-4766-8722-a53e8cfdbea1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.441795837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572524441771560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23f614d0-6d54-4766-8722-a53e8cfdbea1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.442509954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99cdf893-2b0c-4ee2-83a5-fc0b10322266 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.442640791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99cdf893-2b0c-4ee2-83a5-fc0b10322266 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.442873485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572256398169441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112401733564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112310336552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f5c9dcace63e2dc86b034dc66a0a660764b45a0999a972dea4c7c8cd62d11e,PodSandboxId:01cb2806eed5909650fa3a5bbb88b004584ddd9d24eee13df6af3949638dac25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713572110734518387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b,PodSandboxId:e0cd9f38c95d64e8716ff5be77be15b480d34445a3a35c3e35d5cc2bb3e044a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135721
08941915488,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572108700737032,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245ebfdbdadb6145c9104ae5b268ed54335723a4402d44a9f283dca41c61dbf2,PodSandboxId:e3275bdf3889ebe8780f0f686229b8a81cda9dd7ac84f9a1b3e19cf39eab89b1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572088929801835,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78293c6a4434108e95d95ceaf01fb5d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572087052250691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb,PodSandboxId:955da581ce36468153b6418af5b2fbdf608b8744b4c56479853fdcd91e690225,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572087016669323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572086953744333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370,PodSandboxId:19081241153ea0333e203fc33b13da47c76fa5bce9ccea62ac30f45b1c588e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572086899239600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99cdf893-2b0c-4ee2-83a5-fc0b10322266 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.481277606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11496a07-e539-42c1-a722-baf7d2898f87 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.481372133Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11496a07-e539-42c1-a722-baf7d2898f87 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.482913054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c8055e2-4b67-47eb-88ff-87eae6c2c5a3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.483424413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572524483399538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c8055e2-4b67-47eb-88ff-87eae6c2c5a3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.484210975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daea4ff3-2691-4b74-916a-743ceaf61267 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.484293298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daea4ff3-2691-4b74-916a-743ceaf61267 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:22:04 ha-371738 crio[682]: time="2024-04-20 00:22:04.484531339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572256398169441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112401733564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572112310336552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f5c9dcace63e2dc86b034dc66a0a660764b45a0999a972dea4c7c8cd62d11e,PodSandboxId:01cb2806eed5909650fa3a5bbb88b004584ddd9d24eee13df6af3949638dac25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713572110734518387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b,PodSandboxId:e0cd9f38c95d64e8716ff5be77be15b480d34445a3a35c3e35d5cc2bb3e044a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135721
08941915488,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572108700737032,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245ebfdbdadb6145c9104ae5b268ed54335723a4402d44a9f283dca41c61dbf2,PodSandboxId:e3275bdf3889ebe8780f0f686229b8a81cda9dd7ac84f9a1b3e19cf39eab89b1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572088929801835,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78293c6a4434108e95d95ceaf01fb5d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572087052250691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb,PodSandboxId:955da581ce36468153b6418af5b2fbdf608b8744b4c56479853fdcd91e690225,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572087016669323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572086953744333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370,PodSandboxId:19081241153ea0333e203fc33b13da47c76fa5bce9ccea62ac30f45b1c588e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572086899239600,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=daea4ff3-2691-4b74-916a-743ceaf61267 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ee362bb57c39b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   2952502d79ed7       busybox-fc5497c4f-f8cxz
	0895fff8b18b0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   96b6f46faf798       coredns-7db6d8ff4d-9hc82
	a8223d8428849       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   6951735c94141       coredns-7db6d8ff4d-jvvpr
	c0f5c9dcace63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   01cb2806eed59       storage-provisioner
	b13bd67903cc5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   e0cd9f38c95d6       kindnet-s87k2
	484faebf3e657       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      6 minutes ago       Running             kube-proxy                0                   78d8eb3f68b71       kube-proxy-zw62l
	245ebfdbdadb6       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   e3275bdf3889e       kube-vip-ha-371738
	c7bfd34cee24c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   6b52fa6b93c1b       etcd-ha-371738
	f163f250149af       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   955da581ce364       kube-controller-manager-ha-371738
	c9112b9048168       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   6c0d855406f87       kube-scheduler-ha-371738
	c0cd3108e73ec       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   19081241153ea       kube-apiserver-ha-371738
	
	
	==> coredns [0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf] <==
	[INFO] 10.244.2.2:50506 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205402s
	[INFO] 10.244.2.2:56719 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233711s
	[INFO] 10.244.2.2:56750 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00021898s
	[INFO] 10.244.2.2:53438 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111524s
	[INFO] 10.244.0.4:40741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104546s
	[INFO] 10.244.0.4:60826 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142804s
	[INFO] 10.244.1.2:55654 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142714s
	[INFO] 10.244.1.2:34889 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117318s
	[INFO] 10.244.1.2:45674 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142204s
	[INFO] 10.244.1.2:43577 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088578s
	[INFO] 10.244.1.2:36740 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123852s
	[INFO] 10.244.1.2:57454 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168492s
	[INFO] 10.244.2.2:49398 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205465s
	[INFO] 10.244.2.2:48930 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221231s
	[INFO] 10.244.2.2:42052 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108139s
	[INFO] 10.244.0.4:40360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213257s
	[INFO] 10.244.0.4:54447 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081534s
	[INFO] 10.244.1.2:40715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185061s
	[INFO] 10.244.1.2:45537 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165941s
	[INFO] 10.244.1.2:38158 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132179s
	[INFO] 10.244.2.2:42970 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000371127s
	[INFO] 10.244.2.2:50230 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172364s
	[INFO] 10.244.0.4:51459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058901s
	[INFO] 10.244.0.4:59988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131476s
	[INFO] 10.244.1.2:56359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140553s
	
	
	==> coredns [a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c] <==
	[INFO] 10.244.0.4:51638 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000400782s
	[INFO] 10.244.0.4:50604 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000084188s
	[INFO] 10.244.0.4:36574 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002067722s
	[INFO] 10.244.1.2:39782 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118148s
	[INFO] 10.244.2.2:34556 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00272395s
	[INFO] 10.244.2.2:59691 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134121s
	[INFO] 10.244.0.4:54126 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001623235s
	[INFO] 10.244.0.4:42647 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000295581s
	[INFO] 10.244.0.4:47843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001168377s
	[INFO] 10.244.0.4:59380 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132829s
	[INFO] 10.244.0.4:59464 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032892s
	[INFO] 10.244.0.4:52319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063642s
	[INFO] 10.244.1.2:41188 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001744808s
	[INFO] 10.244.1.2:56595 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001214481s
	[INFO] 10.244.2.2:57639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180873s
	[INFO] 10.244.0.4:57748 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177324s
	[INFO] 10.244.0.4:49496 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076032s
	[INFO] 10.244.1.2:36655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131976s
	[INFO] 10.244.2.2:37462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221492s
	[INFO] 10.244.2.2:58605 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186595s
	[INFO] 10.244.0.4:34556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191452s
	[INFO] 10.244.0.4:53073 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000345299s
	[INFO] 10.244.1.2:38241 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000181093s
	[INFO] 10.244.1.2:59304 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166312s
	[INFO] 10.244.1.2:50151 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139637s
	
	
	==> describe nodes <==
	Name:               ha-371738
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T00_14_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:14:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:21:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:18:00 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:18:00 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:18:00 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:18:00 +0000   Sat, 20 Apr 2024 00:15:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-371738
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74609fff13e94a48ba74bd0fc50a4818
	  System UUID:                74609fff-13e9-4a48-ba74-bd0fc50a4818
	  Boot ID:                    2adb72ca-aae0-452d-9d86-779c19923b8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f8cxz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-7db6d8ff4d-9hc82             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m57s
	  kube-system                 coredns-7db6d8ff4d-jvvpr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m57s
	  kube-system                 etcd-ha-371738                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m8s
	  kube-system                 kindnet-s87k2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m57s
	  kube-system                 kube-apiserver-ha-371738             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-controller-manager-ha-371738    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-proxy-zw62l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 kube-scheduler-ha-371738             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-vip-ha-371738                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m55s  kube-proxy       
	  Normal  Starting                 7m9s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m8s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m8s   kubelet          Node ha-371738 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m8s   kubelet          Node ha-371738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m8s   kubelet          Node ha-371738 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m58s  node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal  NodeReady                6m54s  kubelet          Node ha-371738 status is now: NodeReady
	  Normal  RegisteredNode           5m47s  node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal  RegisteredNode           4m34s  node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	
	
	Name:               ha-371738-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_16_02_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:15:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:18:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 20 Apr 2024 00:18:02 +0000   Sat, 20 Apr 2024 00:19:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 20 Apr 2024 00:18:02 +0000   Sat, 20 Apr 2024 00:19:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 20 Apr 2024 00:18:02 +0000   Sat, 20 Apr 2024 00:19:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 20 Apr 2024 00:18:02 +0000   Sat, 20 Apr 2024 00:19:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-371738-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e23e7a13fe24abd8986bea706ca80e3
	  System UUID:                4e23e7a1-3fe2-4abd-8986-bea706ca80e3
	  Boot ID:                    68a6a936-bedb-4253-bc9f-1d7fe3f3747e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j7g5h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-371738-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m3s
	  kube-system                 kindnet-ggw7f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-371738-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-ha-371738-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-59wls                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-371738-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-vip-ha-371738-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)  kubelet          Node ha-371738-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)  kubelet          Node ha-371738-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x7 over 6m5s)  kubelet          Node ha-371738-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m3s                 node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           5m47s                node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           4m34s                node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  NodeNotReady             2m39s                node-controller  Node ha-371738-m02 status is now: NodeNotReady
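	
	The Unknown conditions and the node.kubernetes.io/unreachable taints above mean the ha-371738-m02 kubelet stopped posting status around 00:18:43, consistent with that secondary control-plane node having been stopped as part of the test sequence. A minimal sketch for confirming the same state against a live cluster is shown below; the ha-371738 context name is an assumption taken from the minikube profile labels in these logs, not from a command in this report.
	
	  # context name assumed from the minikube.k8s.io/name label above
	  kubectl --context ha-371738 get nodes
	  kubectl --context ha-371738 get node ha-371738-m02 -o jsonpath='{.spec.taints}'
	
	The first command would list ha-371738-m02 as NotReady; the second would print the unreachable taints recorded above.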
	
	
	Name:               ha-371738-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_17_14_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:17:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:21:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:17:41 +0000   Sat, 20 Apr 2024 00:17:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:17:41 +0000   Sat, 20 Apr 2024 00:17:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:17:41 +0000   Sat, 20 Apr 2024 00:17:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:17:41 +0000   Sat, 20 Apr 2024 00:17:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-371738-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0917a4381b82461ea5ea3ad6015706e2
	  System UUID:                0917a438-1b82-461e-a5ea-3ad6015706e2
	  Boot ID:                    1e10e32a-0de9-4140-bd97-ed1fd3351685
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bqndp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-371738-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m52s
	  kube-system                 kindnet-ph4sb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m54s
	  kube-system                 kube-apiserver-ha-371738-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-ha-371738-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-924z9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-ha-371738-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-vip-ha-371738-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-371738-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-371738-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-371738-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	  Normal  RegisteredNode           4m34s                  node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	
	
	Name:               ha-371738-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_18_15_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:18:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:21:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:18:45 +0000   Sat, 20 Apr 2024 00:18:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:18:45 +0000   Sat, 20 Apr 2024 00:18:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:18:45 +0000   Sat, 20 Apr 2024 00:18:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:18:45 +0000   Sat, 20 Apr 2024 00:18:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-371738-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 236dccfc477f4e3db2ca80077dc2160d
	  System UUID:                236dccfc-477f-4e3d-b2ca-80077dc2160d
	  Boot ID:                    7be0dc95-b84d-4dfb-9a83-50a5c6778683
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zsn9n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m50s
	  kube-system                 kube-proxy-7fn2b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m50s (x4 over 3m51s)  kubelet          Node ha-371738-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x4 over 3m51s)  kubelet          Node ha-371738-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x4 over 3m51s)  kubelet          Node ha-371738-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal  NodeReady                3m42s                  kubelet          Node ha-371738-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr20 00:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052238] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043570] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.622139] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.586855] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.718707] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.470452] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056643] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066813] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.173842] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.129751] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.277871] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.788058] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.061136] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.194311] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +1.186377] kauditd_printk_skb: 57 callbacks suppressed
	[  +8.916528] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.094324] kauditd_printk_skb: 40 callbacks suppressed
	[Apr20 00:15] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.572775] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0] <==
	{"level":"warn","ts":"2024-04-20T00:22:04.739976Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.771675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.781171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.785287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.803536Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.810663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.816755Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.820318Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.824519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.835771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.840391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.842336Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.848799Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.853956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.856955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.867611Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.873322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.886333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.889992Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.893297Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.900667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.905901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.91406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.939272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:22:04.955156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:22:05 up 7 min,  0 users,  load average: 0.26, 0.28, 0.15
	Linux ha-371738 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b13bd67903cc5e0f74278eaddc236e4597d725fc89a163319ccc5ffa57716c6b] <==
	I0420 00:21:30.461334       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:21:40.478513       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:21:40.478674       1 main.go:227] handling current node
	I0420 00:21:40.478704       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:21:40.478802       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:21:40.479200       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0420 00:21:40.479406       1 main.go:250] Node ha-371738-m03 has CIDR [10.244.2.0/24] 
	I0420 00:21:40.479566       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:21:40.479667       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:21:50.486268       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:21:50.486370       1 main.go:227] handling current node
	I0420 00:21:50.486394       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:21:50.486411       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:21:50.486518       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0420 00:21:50.486538       1 main.go:250] Node ha-371738-m03 has CIDR [10.244.2.0/24] 
	I0420 00:21:50.486599       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:21:50.486617       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:22:00.504183       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:22:00.504558       1 main.go:227] handling current node
	I0420 00:22:00.504691       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:22:00.504804       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:22:00.505368       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0420 00:22:00.505495       1 main.go:250] Node ha-371738-m03 has CIDR [10.244.2.0/24] 
	I0420 00:22:00.505669       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:22:00.505699       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c0cd3108e73ec5bd7f90cb4fd3f619ba5cc28c85b3d9801577acddf5ec223370] <==
	W0420 00:14:52.056875       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0420 00:14:52.058205       1 controller.go:615] quota admission added evaluator for: endpoints
	I0420 00:14:52.062880       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0420 00:14:52.265684       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0420 00:14:56.065924       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 00:14:56.135042       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0420 00:14:56.153825       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 00:15:07.046369       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0420 00:15:07.292206       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0420 00:17:37.427043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41014: use of closed network connection
	E0420 00:17:37.656020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41028: use of closed network connection
	E0420 00:17:37.874786       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41058: use of closed network connection
	E0420 00:17:38.075685       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41076: use of closed network connection
	E0420 00:17:38.280325       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41102: use of closed network connection
	E0420 00:17:38.520683       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41134: use of closed network connection
	E0420 00:17:38.751463       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41144: use of closed network connection
	E0420 00:17:38.954683       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41162: use of closed network connection
	E0420 00:17:39.164406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41178: use of closed network connection
	E0420 00:17:39.482200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41210: use of closed network connection
	E0420 00:17:39.686439       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41232: use of closed network connection
	E0420 00:17:39.891351       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41248: use of closed network connection
	E0420 00:17:40.091882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41264: use of closed network connection
	E0420 00:17:40.304345       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41276: use of closed network connection
	E0420 00:17:40.511219       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41300: use of closed network connection
	W0420 00:18:52.069494       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.253]
	
	
	==> kube-controller-manager [f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb] <==
	I0420 00:17:10.499789       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-371738-m03\" does not exist"
	I0420 00:17:10.575576       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-371738-m03" podCIDRs=["10.244.2.0/24"]
	I0420 00:17:11.592960       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-371738-m03"
	I0420 00:17:34.756493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.442733ms"
	I0420 00:17:34.807658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.006278ms"
	I0420 00:17:34.808188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.589µs"
	I0420 00:17:34.915475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.681066ms"
	I0420 00:17:35.260709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="345.061388ms"
	E0420 00:17:35.260954       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0420 00:17:35.319051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.288409ms"
	I0420 00:17:35.319573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="302.179µs"
	I0420 00:17:36.014558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.107µs"
	I0420 00:17:36.744156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.643346ms"
	I0420 00:17:36.744278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.315µs"
	I0420 00:17:36.821605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.429714ms"
	I0420 00:17:36.821809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.1µs"
	I0420 00:17:36.881374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.949318ms"
	I0420 00:17:36.881513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.966µs"
	I0420 00:18:14.293494       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-371738-m04\" does not exist"
	I0420 00:18:14.339520       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-371738-m04" podCIDRs=["10.244.3.0/24"]
	I0420 00:18:16.634871       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-371738-m04"
	I0420 00:18:22.508085       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-371738-m04"
	I0420 00:19:25.089895       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-371738-m04"
	I0420 00:19:25.195440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.932878ms"
	I0420 00:19:25.195672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.218µs"
	
	
	==> kube-proxy [484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c] <==
	I0420 00:15:08.888287       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:15:08.912769       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	I0420 00:15:08.990690       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:15:08.990752       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:15:08.990775       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:15:08.995343       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:15:08.998186       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:15:08.998279       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:15:09.000576       1 config.go:192] "Starting service config controller"
	I0420 00:15:09.000623       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:15:09.000648       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:15:09.000652       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:15:09.001503       1 config.go:319] "Starting node config controller"
	I0420 00:15:09.001549       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:15:09.100821       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:15:09.100919       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:15:09.102329       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9] <==
	W0420 00:14:51.208337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:14:51.208493       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:14:51.222566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:14:51.222594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:14:51.232647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:14:51.232706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:14:51.328436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 00:14:51.328519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 00:14:51.361732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 00:14:51.361814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 00:14:51.485854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:14:51.485939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:14:51.589699       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 00:14:51.589796       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:14:51.602928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:14:51.603046       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0420 00:14:54.721944       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0420 00:17:10.621678       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ph4sb\": pod kindnet-ph4sb is already assigned to node \"ha-371738-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-ph4sb" node="ha-371738-m03"
	E0420 00:17:10.622142       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d0786a22-e08e-4924-93b1-d8f3f34c9da7(kube-system/kindnet-ph4sb) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ph4sb"
	E0420 00:17:10.622424       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ph4sb\": pod kindnet-ph4sb is already assigned to node \"ha-371738-m03\"" pod="kube-system/kindnet-ph4sb"
	I0420 00:17:10.622531       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ph4sb" node="ha-371738-m03"
	E0420 00:18:14.434674       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mkslx\": pod kube-proxy-mkslx is already assigned to node \"ha-371738-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mkslx" node="ha-371738-m04"
	E0420 00:18:14.437803       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e8ec373f-b6a5-4a0e-b0c2-51125d8da4f8(kube-system/kube-proxy-mkslx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-mkslx"
	E0420 00:18:14.437883       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mkslx\": pod kube-proxy-mkslx is already assigned to node \"ha-371738-m04\"" pod="kube-system/kube-proxy-mkslx"
	I0420 00:18:14.437935       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mkslx" node="ha-371738-m04"
	
	
	==> kubelet <==
	Apr 20 00:17:56 ha-371738 kubelet[1375]: E0420 00:17:56.017702    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:17:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:17:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:17:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:17:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:18:56 ha-371738 kubelet[1375]: E0420 00:18:56.021358    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:18:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:18:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:18:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:18:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:19:56 ha-371738 kubelet[1375]: E0420 00:19:56.017277    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:19:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:19:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:19:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:19:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:20:56 ha-371738 kubelet[1375]: E0420 00:20:56.016277    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:20:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:20:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:20:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:20:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:21:56 ha-371738 kubelet[1375]: E0420 00:21:56.015070    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:21:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:21:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:21:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:21:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-371738 -n ha-371738
helpers_test.go:261: (dbg) Run:  kubectl --context ha-371738 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (56.58s)
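
Side note on the kubelet lines in the post-mortem above: the repeating "Could not set up iptables canary" errors come from ip6tables v1.8.9 being unable to open the `nat' table inside the guest ("Table does not exist (do you need to insmod?)"). A minimal diagnostic sketch, assuming the node is still reachable over minikube ssh and that a missing ip6table_nat kernel module is the cause; these commands are illustrative only and are not run by the test harness:

	# Reproduce the error the kubelet canary hits (expect "Table does not exist"):
	out/minikube-linux-amd64 -p ha-371738 ssh "sudo ip6tables -t nat -L -n"
	# Check whether the IPv6 nat module is loaded at all:
	out/minikube-linux-amd64 -p ha-371738 ssh "lsmod | grep ip6table_nat"
	# If it is absent, loading it should let the KUBE-KUBELET-CANARY chain be created:
	out/minikube-linux-amd64 -p ha-371738 ssh "sudo modprobe ip6table_nat"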

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-371738 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-371738 -v=7 --alsologtostderr
E0420 00:23:11.657572   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:23:39.341915   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-371738 -v=7 --alsologtostderr: exit status 82 (2m2.715336487s)

                                                
                                                
-- stdout --
	* Stopping node "ha-371738-m04"  ...
	* Stopping node "ha-371738-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:22:06.462328  100399 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:22:06.462471  100399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:22:06.462483  100399 out.go:304] Setting ErrFile to fd 2...
	I0420 00:22:06.462490  100399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:22:06.462749  100399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:22:06.463047  100399 out.go:298] Setting JSON to false
	I0420 00:22:06.463140  100399 mustload.go:65] Loading cluster: ha-371738
	I0420 00:22:06.463566  100399 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:22:06.463693  100399 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:22:06.463959  100399 mustload.go:65] Loading cluster: ha-371738
	I0420 00:22:06.464167  100399 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:22:06.464216  100399 stop.go:39] StopHost: ha-371738-m04
	I0420 00:22:06.464642  100399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:06.464692  100399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:06.480744  100399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38323
	I0420 00:22:06.481212  100399 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:06.481935  100399 main.go:141] libmachine: Using API Version  1
	I0420 00:22:06.481965  100399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:06.482294  100399 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:06.484945  100399 out.go:177] * Stopping node "ha-371738-m04"  ...
	I0420 00:22:06.486933  100399 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0420 00:22:06.486963  100399 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:22:06.487184  100399 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0420 00:22:06.487223  100399 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:22:06.489782  100399 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:22:06.490181  100399 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:17:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:22:06.490219  100399 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:22:06.490370  100399 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:22:06.490557  100399 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:22:06.490730  100399 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:22:06.490877  100399 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:22:06.577762  100399 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0420 00:22:06.632811  100399 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0420 00:22:06.688442  100399 main.go:141] libmachine: Stopping "ha-371738-m04"...
	I0420 00:22:06.688474  100399 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:22:06.690047  100399 main.go:141] libmachine: (ha-371738-m04) Calling .Stop
	I0420 00:22:06.693568  100399 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 0/120
	I0420 00:22:07.695661  100399 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 1/120
	I0420 00:22:08.697721  100399 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:22:08.699313  100399 main.go:141] libmachine: Machine "ha-371738-m04" was stopped.
	I0420 00:22:08.699334  100399 stop.go:75] duration metric: took 2.212402398s to stop
	I0420 00:22:08.699359  100399 stop.go:39] StopHost: ha-371738-m03
	I0420 00:22:08.699654  100399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:22:08.699707  100399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:22:08.716575  100399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0420 00:22:08.717162  100399 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:22:08.717698  100399 main.go:141] libmachine: Using API Version  1
	I0420 00:22:08.717721  100399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:22:08.718066  100399 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:22:08.719930  100399 out.go:177] * Stopping node "ha-371738-m03"  ...
	I0420 00:22:08.721024  100399 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0420 00:22:08.721046  100399 main.go:141] libmachine: (ha-371738-m03) Calling .DriverName
	I0420 00:22:08.721235  100399 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0420 00:22:08.721253  100399 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHHostname
	I0420 00:22:08.723822  100399 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:22:08.724271  100399 main.go:141] libmachine: (ha-371738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e5:aa", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:16:34 +0000 UTC Type:0 Mac:52:54:00:cc:e5:aa Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-371738-m03 Clientid:01:52:54:00:cc:e5:aa}
	I0420 00:22:08.724305  100399 main.go:141] libmachine: (ha-371738-m03) DBG | domain ha-371738-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:cc:e5:aa in network mk-ha-371738
	I0420 00:22:08.724428  100399 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHPort
	I0420 00:22:08.724594  100399 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHKeyPath
	I0420 00:22:08.724766  100399 main.go:141] libmachine: (ha-371738-m03) Calling .GetSSHUsername
	I0420 00:22:08.724933  100399 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m03/id_rsa Username:docker}
	I0420 00:22:08.809572  100399 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0420 00:22:08.863698  100399 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0420 00:22:08.920180  100399 main.go:141] libmachine: Stopping "ha-371738-m03"...
	I0420 00:22:08.920249  100399 main.go:141] libmachine: (ha-371738-m03) Calling .GetState
	I0420 00:22:08.921857  100399 main.go:141] libmachine: (ha-371738-m03) Calling .Stop
	I0420 00:22:08.925339  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 0/120
	I0420 00:22:09.926621  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 1/120
	I0420 00:22:10.927975  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 2/120
	I0420 00:22:11.929288  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 3/120
	I0420 00:22:12.930638  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 4/120
	I0420 00:22:13.933098  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 5/120
	I0420 00:22:14.934937  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 6/120
	I0420 00:22:15.936879  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 7/120
	I0420 00:22:16.938348  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 8/120
	I0420 00:22:17.940097  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 9/120
	I0420 00:22:18.942043  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 10/120
	I0420 00:22:19.943815  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 11/120
	I0420 00:22:20.945464  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 12/120
	I0420 00:22:21.947721  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 13/120
	I0420 00:22:22.950085  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 14/120
	I0420 00:22:23.951726  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 15/120
	I0420 00:22:24.953257  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 16/120
	I0420 00:22:25.954616  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 17/120
	I0420 00:22:26.956106  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 18/120
	I0420 00:22:27.957540  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 19/120
	I0420 00:22:28.959195  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 20/120
	I0420 00:22:29.960705  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 21/120
	I0420 00:22:30.962160  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 22/120
	I0420 00:22:31.963790  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 23/120
	I0420 00:22:32.965325  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 24/120
	I0420 00:22:33.967452  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 25/120
	I0420 00:22:34.968850  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 26/120
	I0420 00:22:35.970455  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 27/120
	I0420 00:22:36.972357  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 28/120
	I0420 00:22:37.973950  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 29/120
	I0420 00:22:38.976125  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 30/120
	I0420 00:22:39.977571  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 31/120
	I0420 00:22:40.979085  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 32/120
	I0420 00:22:41.980952  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 33/120
	I0420 00:22:42.982633  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 34/120
	I0420 00:22:43.984226  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 35/120
	I0420 00:22:44.985597  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 36/120
	I0420 00:22:45.987903  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 37/120
	I0420 00:22:46.989152  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 38/120
	I0420 00:22:47.990648  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 39/120
	I0420 00:22:48.992298  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 40/120
	I0420 00:22:49.993606  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 41/120
	I0420 00:22:50.994840  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 42/120
	I0420 00:22:51.996175  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 43/120
	I0420 00:22:52.997658  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 44/120
	I0420 00:22:53.999280  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 45/120
	I0420 00:22:55.000828  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 46/120
	I0420 00:22:56.002264  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 47/120
	I0420 00:22:57.003616  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 48/120
	I0420 00:22:58.004947  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 49/120
	I0420 00:22:59.006729  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 50/120
	I0420 00:23:00.008061  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 51/120
	I0420 00:23:01.009453  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 52/120
	I0420 00:23:02.010701  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 53/120
	I0420 00:23:03.011964  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 54/120
	I0420 00:23:04.013725  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 55/120
	I0420 00:23:05.015240  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 56/120
	I0420 00:23:06.016590  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 57/120
	I0420 00:23:07.017979  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 58/120
	I0420 00:23:08.019822  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 59/120
	I0420 00:23:09.021049  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 60/120
	I0420 00:23:10.022396  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 61/120
	I0420 00:23:11.023672  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 62/120
	I0420 00:23:12.025400  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 63/120
	I0420 00:23:13.026605  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 64/120
	I0420 00:23:14.028547  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 65/120
	I0420 00:23:15.030122  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 66/120
	I0420 00:23:16.031808  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 67/120
	I0420 00:23:17.033204  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 68/120
	I0420 00:23:18.034437  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 69/120
	I0420 00:23:19.036000  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 70/120
	I0420 00:23:20.037784  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 71/120
	I0420 00:23:21.039144  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 72/120
	I0420 00:23:22.040551  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 73/120
	I0420 00:23:23.041967  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 74/120
	I0420 00:23:24.043505  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 75/120
	I0420 00:23:25.045087  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 76/120
	I0420 00:23:26.046446  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 77/120
	I0420 00:23:27.047938  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 78/120
	I0420 00:23:28.049256  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 79/120
	I0420 00:23:29.051539  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 80/120
	I0420 00:23:30.052922  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 81/120
	I0420 00:23:31.054340  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 82/120
	I0420 00:23:32.055839  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 83/120
	I0420 00:23:33.057069  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 84/120
	I0420 00:23:34.059014  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 85/120
	I0420 00:23:35.060519  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 86/120
	I0420 00:23:36.061812  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 87/120
	I0420 00:23:37.063127  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 88/120
	I0420 00:23:38.064469  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 89/120
	I0420 00:23:39.066515  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 90/120
	I0420 00:23:40.068869  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 91/120
	I0420 00:23:41.070238  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 92/120
	I0420 00:23:42.071779  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 93/120
	I0420 00:23:43.073054  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 94/120
	I0420 00:23:44.074696  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 95/120
	I0420 00:23:45.076178  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 96/120
	I0420 00:23:46.077590  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 97/120
	I0420 00:23:47.079151  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 98/120
	I0420 00:23:48.080453  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 99/120
	I0420 00:23:49.082134  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 100/120
	I0420 00:23:50.083351  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 101/120
	I0420 00:23:51.084528  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 102/120
	I0420 00:23:52.085932  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 103/120
	I0420 00:23:53.087411  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 104/120
	I0420 00:23:54.088725  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 105/120
	I0420 00:23:55.089939  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 106/120
	I0420 00:23:56.091833  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 107/120
	I0420 00:23:57.093166  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 108/120
	I0420 00:23:58.094490  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 109/120
	I0420 00:23:59.096294  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 110/120
	I0420 00:24:00.097928  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 111/120
	I0420 00:24:01.099156  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 112/120
	I0420 00:24:02.100534  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 113/120
	I0420 00:24:03.102670  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 114/120
	I0420 00:24:04.104382  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 115/120
	I0420 00:24:05.105679  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 116/120
	I0420 00:24:06.106893  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 117/120
	I0420 00:24:07.108438  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 118/120
	I0420 00:24:08.109845  100399 main.go:141] libmachine: (ha-371738-m03) Waiting for machine to stop 119/120
	I0420 00:24:09.110817  100399 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0420 00:24:09.110891  100399 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0420 00:24:09.113282  100399 out.go:177] 
	W0420 00:24:09.114976  100399 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0420 00:24:09.115005  100399 out.go:239] * 
	* 
	W0420 00:24:09.118670  100399 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 00:24:09.119952  100399 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-371738 -v=7 --alsologtostderr" : exit status 82
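Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: libmachine polled ha-371738-m03 through all 120 one-second intervals (0/120 to 119/120) and the VM never left the "Running" state. With the kvm2 driver each node is a libvirt domain, so a stuck stop can usually be inspected and forced from the host; a hedged sketch using standard virsh commands (domain names are assumed to match the node names in the log, and sudo may be unnecessary depending on libvirt group membership):

	# Confirm which ha-371738 domains libvirt still reports as running:
	sudo virsh list --all | grep ha-371738
	# Request a graceful ACPI shutdown of the stuck node:
	sudo virsh shutdown ha-371738-m03
	# Hard power-off if it still shows "running" after a grace period:
	sudo virsh destroy ha-371738-m03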
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-371738 --wait=true -v=7 --alsologtostderr
E0420 00:25:27.815612   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:26:50.862862   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-371738 --wait=true -v=7 --alsologtostderr: (4m0.451000603s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-371738
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-371738 -n ha-371738
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 logs -n 25
E0420 00:28:11.658033   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-371738 logs -n 25: (1.990693332s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m02:/home/docker/cp-test_ha-371738-m03_ha-371738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m02 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04:/home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m04 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp testdata/cp-test.txt                                                | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3122242891/001/cp-test_ha-371738-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738:/home/docker/cp-test_ha-371738-m04_ha-371738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738 sudo cat                                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m02:/home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m02 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03:/home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m03 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-371738 node stop m02 -v=7                                                     | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-371738 node start m02 -v=7                                                    | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-371738 -v=7                                                           | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-371738 -v=7                                                                | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-371738 --wait=true -v=7                                                    | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:24 UTC | 20 Apr 24 00:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-371738                                                                | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:28 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:24:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:24:09.181716  100866 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:24:09.181839  100866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:24:09.181848  100866 out.go:304] Setting ErrFile to fd 2...
	I0420 00:24:09.181853  100866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:24:09.182059  100866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:24:09.182691  100866 out.go:298] Setting JSON to false
	I0420 00:24:09.183586  100866 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":11196,"bootTime":1713561453,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 00:24:09.183653  100866 start.go:139] virtualization: kvm guest
	I0420 00:24:09.186051  100866 out.go:177] * [ha-371738] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 00:24:09.187914  100866 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:24:09.187886  100866 notify.go:220] Checking for updates...
	I0420 00:24:09.189340  100866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:24:09.190540  100866 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:24:09.191762  100866 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:24:09.192986  100866 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 00:24:09.194367  100866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:24:09.195987  100866 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:24:09.196096  100866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:24:09.196519  100866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:24:09.196571  100866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:24:09.211709  100866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
	I0420 00:24:09.212095  100866 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:24:09.212637  100866 main.go:141] libmachine: Using API Version  1
	I0420 00:24:09.212660  100866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:24:09.213016  100866 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:24:09.213243  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:24:09.247629  100866 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 00:24:09.248959  100866 start.go:297] selected driver: kvm2
	I0420 00:24:09.248974  100866 start.go:901] validating driver "kvm2" against &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default A
PIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false head
lamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:24:09.249096  100866 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:24:09.249431  100866 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:24:09.249506  100866 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 00:24:09.263932  100866 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 00:24:09.264832  100866 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:24:09.264907  100866 cni.go:84] Creating CNI manager for ""
	I0420 00:24:09.264930  100866 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0420 00:24:09.265001  100866 start.go:340] cluster config:
	{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:24:09.265160  100866 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:24:09.267560  100866 out.go:177] * Starting "ha-371738" primary control-plane node in "ha-371738" cluster
	I0420 00:24:09.269067  100866 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:24:09.269101  100866 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 00:24:09.269112  100866 cache.go:56] Caching tarball of preloaded images
	I0420 00:24:09.269211  100866 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:24:09.269223  100866 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:24:09.269399  100866 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:24:09.269606  100866 start.go:360] acquireMachinesLock for ha-371738: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:24:09.269671  100866 start.go:364] duration metric: took 45.599µs to acquireMachinesLock for "ha-371738"
	I0420 00:24:09.269692  100866 start.go:96] Skipping create...Using existing machine configuration
	I0420 00:24:09.269702  100866 fix.go:54] fixHost starting: 
	I0420 00:24:09.269954  100866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:24:09.269992  100866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:24:09.283515  100866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0420 00:24:09.283948  100866 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:24:09.284473  100866 main.go:141] libmachine: Using API Version  1
	I0420 00:24:09.284499  100866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:24:09.284783  100866 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:24:09.284944  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:24:09.285097  100866 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:24:09.286700  100866 fix.go:112] recreateIfNeeded on ha-371738: state=Running err=<nil>
	W0420 00:24:09.286718  100866 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 00:24:09.289384  100866 out.go:177] * Updating the running kvm2 "ha-371738" VM ...
	I0420 00:24:09.290616  100866 machine.go:94] provisionDockerMachine start ...
	I0420 00:24:09.290661  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:24:09.290899  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.293923  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.294440  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.294469  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.294672  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.294933  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.295130  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.295291  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.295487  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:24:09.295659  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:24:09.295669  100866 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 00:24:09.415198  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738
	
	I0420 00:24:09.415234  100866 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:24:09.415503  100866 buildroot.go:166] provisioning hostname "ha-371738"
	I0420 00:24:09.415534  100866 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:24:09.415751  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.418451  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.418843  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.418879  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.419059  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.419228  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.419361  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.419450  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.419554  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:24:09.419719  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:24:09.419732  100866 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-371738 && echo "ha-371738" | sudo tee /etc/hostname
	I0420 00:24:09.554268  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738
	
	I0420 00:24:09.554304  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.557338  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.557763  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.557787  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.557981  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.558195  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.558370  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.558552  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.558757  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:24:09.558933  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:24:09.558950  100866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-371738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-371738/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-371738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:24:09.678695  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:24:09.678735  100866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:24:09.678763  100866 buildroot.go:174] setting up certificates
	I0420 00:24:09.678774  100866 provision.go:84] configureAuth start
	I0420 00:24:09.678783  100866 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:24:09.679073  100866 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:24:09.681933  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.682332  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.682362  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.682496  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.684847  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.685202  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.685218  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.685367  100866 provision.go:143] copyHostCerts
	I0420 00:24:09.685392  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:24:09.685434  100866 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:24:09.685446  100866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:24:09.685532  100866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:24:09.685628  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:24:09.685646  100866 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:24:09.685650  100866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:24:09.685679  100866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:24:09.685799  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:24:09.685821  100866 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:24:09.685826  100866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:24:09.685852  100866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:24:09.685904  100866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.ha-371738 san=[127.0.0.1 192.168.39.217 ha-371738 localhost minikube]
	I0420 00:24:09.808106  100866 provision.go:177] copyRemoteCerts
	I0420 00:24:09.808171  100866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:24:09.808195  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.810886  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.811262  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.811284  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.811508  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.811725  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.811873  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.812010  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:24:09.905459  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:24:09.905522  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:24:09.936017  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:24:09.936102  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0420 00:24:09.964960  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:24:09.965014  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 00:24:09.993992  100866 provision.go:87] duration metric: took 315.201574ms to configureAuth
	I0420 00:24:09.994021  100866 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:24:09.994263  100866 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:24:09.994345  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.997101  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.997564  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.997593  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.997806  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.998000  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.998178  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.998328  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.998485  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:24:09.998673  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:24:09.998690  100866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:25:40.874739  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:25:40.874774  100866 machine.go:97] duration metric: took 1m31.584144057s to provisionDockerMachine
	I0420 00:25:40.874789  100866 start.go:293] postStartSetup for "ha-371738" (driver="kvm2")
	I0420 00:25:40.874799  100866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:25:40.874816  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:40.875198  100866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:25:40.875235  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:40.878657  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:40.879190  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:40.879218  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:40.879373  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:40.879593  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:40.879866  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:40.880030  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:25:40.969290  100866 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:25:40.974265  100866 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:25:40.974295  100866 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:25:40.974367  100866 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:25:40.974461  100866 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:25:40.974476  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:25:40.974587  100866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:25:40.985507  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:25:41.013076  100866 start.go:296] duration metric: took 138.273677ms for postStartSetup
	I0420 00:25:41.013130  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.013503  100866 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0420 00:25:41.013529  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:41.016493  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.016922  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.016951  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.017096  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:41.017347  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.017495  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:41.017621  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	W0420 00:25:41.103730  100866 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0420 00:25:41.103755  100866 fix.go:56] duration metric: took 1m31.834054018s for fixHost
	I0420 00:25:41.103776  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:41.106241  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.106734  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.106764  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.106894  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:41.107083  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.107254  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.107455  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:41.107603  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:25:41.107814  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:25:41.107826  100866 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:25:41.222515  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713572741.184353502
	
	I0420 00:25:41.222547  100866 fix.go:216] guest clock: 1713572741.184353502
	I0420 00:25:41.222558  100866 fix.go:229] Guest: 2024-04-20 00:25:41.184353502 +0000 UTC Remote: 2024-04-20 00:25:41.103762097 +0000 UTC m=+91.973048737 (delta=80.591405ms)
	I0420 00:25:41.222588  100866 fix.go:200] guest clock delta is within tolerance: 80.591405ms
	I0420 00:25:41.222597  100866 start.go:83] releasing machines lock for "ha-371738", held for 1m31.952913361s
	I0420 00:25:41.222626  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.222989  100866 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:25:41.225645  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.226091  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.226125  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.226272  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.226784  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.226979  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.227071  100866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:25:41.227117  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:41.227217  100866 ssh_runner.go:195] Run: cat /version.json
	I0420 00:25:41.227248  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:41.229815  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.230132  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.230172  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.230196  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.230306  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:41.230510  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.230566  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.230591  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.230676  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:41.230750  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:41.230829  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:25:41.230891  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.231043  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:41.231187  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:25:41.316002  100866 ssh_runner.go:195] Run: systemctl --version
	I0420 00:25:41.343209  100866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:25:41.524166  100866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 00:25:41.537921  100866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:25:41.537981  100866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:25:41.547984  100866 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0420 00:25:41.548010  100866 start.go:494] detecting cgroup driver to use...
	I0420 00:25:41.548078  100866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:25:41.566253  100866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:25:41.580587  100866 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:25:41.580641  100866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:25:41.594871  100866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:25:41.609428  100866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:25:41.816703  100866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:25:41.992664  100866 docker.go:233] disabling docker service ...
	I0420 00:25:41.992750  100866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:25:42.014406  100866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:25:42.043040  100866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:25:42.227819  100866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:25:42.416473  100866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:25:42.438563  100866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:25:42.461068  100866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:25:42.461138  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.472488  100866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:25:42.472556  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.483609  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.494702  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.505772  100866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:25:42.517260  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.528580  100866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.540491  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.551997  100866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:25:42.562715  100866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:25:42.573048  100866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:25:42.737013  100866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 00:25:43.138669  100866 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:25:43.138738  100866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:25:43.145229  100866 start.go:562] Will wait 60s for crictl version
	I0420 00:25:43.145292  100866 ssh_runner.go:195] Run: which crictl
	I0420 00:25:43.150054  100866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:25:43.194670  100866 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 00:25:43.194763  100866 ssh_runner.go:195] Run: crio --version
	I0420 00:25:43.226380  100866 ssh_runner.go:195] Run: crio --version
	I0420 00:25:43.261092  100866 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:25:43.262436  100866 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:25:43.264949  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:43.265302  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:43.265341  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:43.265542  100866 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:25:43.270790  100866 kubeadm.go:877] updating cluster {Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 00:25:43.270924  100866 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:25:43.270973  100866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:25:43.320649  100866 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:25:43.320672  100866 crio.go:433] Images already preloaded, skipping extraction
	I0420 00:25:43.320722  100866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:25:43.436669  100866 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:25:43.436699  100866 cache_images.go:84] Images are preloaded, skipping loading
	I0420 00:25:43.436712  100866 kubeadm.go:928] updating node { 192.168.39.217 8443 v1.30.0 crio true true} ...
	I0420 00:25:43.436849  100866 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-371738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 00:25:43.436939  100866 ssh_runner.go:195] Run: crio config
	I0420 00:25:43.643509  100866 cni.go:84] Creating CNI manager for ""
	I0420 00:25:43.643532  100866 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0420 00:25:43.643545  100866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 00:25:43.643572  100866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-371738 NodeName:ha-371738 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 00:25:43.643767  100866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-371738"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 00:25:43.643793  100866 kube-vip.go:111] generating kube-vip config ...
	I0420 00:25:43.643860  100866 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0420 00:25:43.688493  100866 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 00:25:43.688677  100866 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0420 00:25:43.688755  100866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:25:43.798966  100866 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 00:25:43.799047  100866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0420 00:25:43.991880  100866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0420 00:25:44.158652  100866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:25:44.448421  100866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0420 00:25:44.490941  100866 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0420 00:25:44.550137  100866 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0420 00:25:44.563870  100866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:25:44.913689  100866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:25:45.070624  100866 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738 for IP: 192.168.39.217
	I0420 00:25:45.070649  100866 certs.go:194] generating shared ca certs ...
	I0420 00:25:45.070670  100866 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:25:45.070836  100866 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:25:45.070888  100866 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:25:45.070902  100866 certs.go:256] generating profile certs ...
	I0420 00:25:45.071126  100866 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key
	I0420 00:25:45.071171  100866 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.6d7bd836
	I0420 00:25:45.071195  100866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.6d7bd836 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.48 192.168.39.253 192.168.39.254]
	I0420 00:25:45.153131  100866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.6d7bd836 ...
	I0420 00:25:45.153160  100866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.6d7bd836: {Name:mkbc37b952bd6a1c868dc8556da9c440274c7ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:25:45.153333  100866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.6d7bd836 ...
	I0420 00:25:45.153344  100866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.6d7bd836: {Name:mk5ee993a93be74423a5fcd6d4233c0e060bec55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:25:45.153416  100866 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.6d7bd836 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt
	I0420 00:25:45.153567  100866 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.6d7bd836 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key
	I0420 00:25:45.153697  100866 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key
	I0420 00:25:45.153714  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:25:45.153728  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:25:45.153741  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:25:45.153754  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:25:45.153766  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:25:45.153778  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:25:45.153789  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:25:45.153801  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:25:45.153851  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:25:45.153878  100866 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:25:45.153889  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:25:45.153911  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:25:45.153932  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:25:45.153953  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:25:45.153989  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:25:45.154017  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:25:45.154031  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:25:45.154044  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:25:45.154672  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:25:45.251999  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:25:45.297557  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:25:45.334860  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:25:45.382519  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0420 00:25:45.417717  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:25:45.452263  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:25:45.485507  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 00:25:45.516847  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:25:45.546745  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:25:45.578765  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:25:45.614949  100866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 00:25:45.638686  100866 ssh_runner.go:195] Run: openssl version
	I0420 00:25:45.645956  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:25:45.662510  100866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:25:45.667960  100866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:25:45.668017  100866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:25:45.676567  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 00:25:45.692548  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:25:45.707809  100866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:25:45.713864  100866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:25:45.713915  100866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:25:45.720730  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 00:25:45.734551  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:25:45.748291  100866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:25:45.753579  100866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:25:45.753629  100866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:25:45.763413  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 00:25:45.777863  100866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:25:45.783623  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 00:25:45.791284  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 00:25:45.799812  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 00:25:45.807849  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 00:25:45.814342  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 00:25:45.820697  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
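	(Editor's note) The block of openssl runs above first installs the CA bundles under /usr/share/ca-certificates and creates the hash-named symlinks in /etc/ssl/certs, then verifies each control-plane certificate with `openssl x509 -checkend 86400`, which fails when a certificate expires within the next 24 hours. A minimal Go sketch of an equivalent expiry check (illustrative only, not minikube's own code; the certificate path is an assumption taken from the log above):

```go
// checkend.go: a sketch of the same check the log performs with
// `openssl x509 -noout -in <cert> -checkend 86400`: load a PEM-encoded
// certificate and report whether it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires before now+window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path is illustrative; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}
```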
	I0420 00:25:45.827377  100866 kubeadm.go:391] StartCluster: {Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.
168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false hel
m-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
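	(Editor's note) The ClusterConfig dumped by StartCluster above describes an HA topology: three control-plane nodes (the unnamed primary at 192.168.39.217, m02, and m03) behind the API-server VIP 192.168.39.254, plus one worker-only node (m04). A reduced Go sketch of just that node list, with a struct invented for illustration rather than taken from minikube's real config types:

```go
// topology.go: a trimmed representation of the Nodes section logged above.
package main

import "fmt"

// Node mirrors only the handful of fields visible in the logged config.
type Node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

func main() {
	// Values copied from the logged ClusterConfig for ha-371738.
	nodes := []Node{
		{Name: "", IP: "192.168.39.217", Port: 8443, ControlPlane: true, Worker: true},
		{Name: "m02", IP: "192.168.39.48", Port: 8443, ControlPlane: true, Worker: true},
		{Name: "m03", IP: "192.168.39.253", Port: 8443, ControlPlane: true, Worker: true},
		{Name: "m04", IP: "192.168.39.61", Port: 0, ControlPlane: false, Worker: true},
	}
	for _, n := range nodes {
		role := "worker"
		if n.ControlPlane {
			role = "control-plane"
		}
		fmt.Printf("%-4s %-15s port=%d role=%s\n", n.Name, n.IP, n.Port, role)
	}
}
```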
	I0420 00:25:45.827481  100866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:25:45.827534  100866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:25:45.918370  100866 cri.go:89] found id: "988e0e16ce68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626"
	I0420 00:25:45.918397  100866 cri.go:89] found id: "8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a"
	I0420 00:25:45.918403  100866 cri.go:89] found id: "b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f"
	I0420 00:25:45.918408  100866 cri.go:89] found id: "14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80"
	I0420 00:25:45.918412  100866 cri.go:89] found id: "97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748"
	I0420 00:25:45.918416  100866 cri.go:89] found id: "323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb"
	I0420 00:25:45.918420  100866 cri.go:89] found id: "7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37"
	I0420 00:25:45.918423  100866 cri.go:89] found id: "0aa3be068585a3c4374d1e7e092e2ec838b2b02090d194e30d2c924030aa9509"
	I0420 00:25:45.918427  100866 cri.go:89] found id: "b90a66605161e3e4d5e9517fd1c01ea9e9aa4354ad448006470591f9cb7eb927"
	I0420 00:25:45.918439  100866 cri.go:89] found id: "0f3c494d087c330d86ae9524f1ee4fbc2f2b52dc84c7babc65df9d0767fc394d"
	I0420 00:25:45.918444  100866 cri.go:89] found id: "0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf"
	I0420 00:25:45.918448  100866 cri.go:89] found id: "a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c"
	I0420 00:25:45.918452  100866 cri.go:89] found id: "484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c"
	I0420 00:25:45.918456  100866 cri.go:89] found id: "c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0"
	I0420 00:25:45.918462  100866 cri.go:89] found id: "f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb"
	I0420 00:25:45.918466  100866 cri.go:89] found id: "c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9"
	I0420 00:25:45.918473  100866 cri.go:89] found id: ""
	I0420 00:25:45.918532  100866 ssh_runner.go:195] Run: sudo runc list -f json
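	(Editor's note) The "found id:" lines above come from the runner executing `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` over SSH and collecting one container ID per output line, covering both running and exited kube-system containers. A self-contained local sketch of the same query, assuming crictl is available via sudo (this mirrors, but is not, minikube's cri.go helper):

```go
// list_kube_system.go: run crictl with the same flags as the log above and
// collect the returned container IDs, one per non-empty output line.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
```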
	
	
	==> CRI-O <==
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.388644971Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70cbd074-8cdb-4aa9-ae91-08e4a41f5fbc name=/runtime.v1.RuntimeService/Version
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.390349269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0fc82d8-9b1b-47b5-af27-77778cefd05d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.390753766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572890390721929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0fc82d8-9b1b-47b5-af27-77778cefd05d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.391390780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6aa3eacc-0b6e-4085-9ae6-1d2a2258cc01 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.391502564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aa3eacc-0b6e-4085-9ae6-1d2a2258cc01 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.391990981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac855ed56c2e46b910c6f0e29306cd74994e0b52011d6e705d82d492d9434235,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713572843021809392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d,PodSandboxId:6176436eca7ffb5fff720d942de5cf0c751e7751f942a46f8b3e9c39211da722,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713572787014729451,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572784013589195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9a7bfac3a904c015a4a564bcd8fa8210619f2e14534023a7f93fe8f1c138,PodSandboxId:36cad30ca97d8fe4bf9874a7fee12528dfa08094b07de876ba9d2fd93999d58e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572777433824081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572776618832806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eebe566cbfae552db6d06e3ce878cd578cace4ac3b8b5be71bf4bd9ff6666a,PodSandboxId:20793aefea29bb8e40db2a4ce691cdd3630bf764651d365aed83ed198fe8e024,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572762105875303,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68c3bb304e506c6468b1d2cd5dcafae,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d,PodSandboxId:e56329660a79e2c3c8e44ab2dfd633e9f0e3186ad73a4941fd477d411d87249a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572744236067312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713572744356481192,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988e0e16c
e68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626,PodSandboxId:0ffa8680c564a324c1dade6f45f502434ec574f667d3abb7075c5524977129a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744843997907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a,PodSandboxId:0e08629c20763929cb9013c6a0c063dd3d2ef275020516b2b9618b2e44aaca3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744831575113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80,PodSandboxId:3fae8202a1068a4fafc249c58f659b6878754ca5319a2becf15f0a93fff5631f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572744185281385,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f,PodSandboxId:077e956a7dccc6b6b9caf01533ba20b013a217e51a01b45d743b560615453526,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572744205398446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af96
16002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713572744006298165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotation
s:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713572743991218482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37,PodSandboxId:a7ec7ade955ff3362448ed7381df73019003e38d57dc42f24b3e4dffda16cff2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713572742041253686,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernet
es.container.hash: dd367de8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713572256398294941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes
.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112401908790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112310427901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431f
ceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713572108700751072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691
a75a899,State:CONTAINER_EXITED,CreatedAt:1713572087052357246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:17
13572086953871804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6aa3eacc-0b6e-4085-9ae6-1d2a2258cc01 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.446243447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d20f502-5219-4e8a-923d-cc2f81287ee4 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.446388475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d20f502-5219-4e8a-923d-cc2f81287ee4 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.448521795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4e1c16a-2372-46ee-af2e-ea18561befa8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.448963928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572890448941655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4e1c16a-2372-46ee-af2e-ea18561befa8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.450390472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fd95e8a-8841-4272-8656-4d48c80c8867 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.450443631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fd95e8a-8841-4272-8656-4d48c80c8867 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.451970586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac855ed56c2e46b910c6f0e29306cd74994e0b52011d6e705d82d492d9434235,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713572843021809392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d,PodSandboxId:6176436eca7ffb5fff720d942de5cf0c751e7751f942a46f8b3e9c39211da722,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713572787014729451,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572784013589195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9a7bfac3a904c015a4a564bcd8fa8210619f2e14534023a7f93fe8f1c138,PodSandboxId:36cad30ca97d8fe4bf9874a7fee12528dfa08094b07de876ba9d2fd93999d58e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572777433824081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572776618832806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eebe566cbfae552db6d06e3ce878cd578cace4ac3b8b5be71bf4bd9ff6666a,PodSandboxId:20793aefea29bb8e40db2a4ce691cdd3630bf764651d365aed83ed198fe8e024,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572762105875303,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68c3bb304e506c6468b1d2cd5dcafae,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d,PodSandboxId:e56329660a79e2c3c8e44ab2dfd633e9f0e3186ad73a4941fd477d411d87249a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572744236067312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713572744356481192,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988e0e16c
e68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626,PodSandboxId:0ffa8680c564a324c1dade6f45f502434ec574f667d3abb7075c5524977129a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744843997907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a,PodSandboxId:0e08629c20763929cb9013c6a0c063dd3d2ef275020516b2b9618b2e44aaca3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744831575113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80,PodSandboxId:3fae8202a1068a4fafc249c58f659b6878754ca5319a2becf15f0a93fff5631f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572744185281385,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f,PodSandboxId:077e956a7dccc6b6b9caf01533ba20b013a217e51a01b45d743b560615453526,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572744205398446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af96
16002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713572744006298165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotation
s:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713572743991218482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37,PodSandboxId:a7ec7ade955ff3362448ed7381df73019003e38d57dc42f24b3e4dffda16cff2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713572742041253686,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernet
es.container.hash: dd367de8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713572256398294941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes
.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112401908790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112310427901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431f
ceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713572108700751072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691
a75a899,State:CONTAINER_EXITED,CreatedAt:1713572087052357246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:17
13572086953871804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fd95e8a-8841-4272-8656-4d48c80c8867 name=/runtime.v1.RuntimeService/ListContainers
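	(Editor's note) The CRI-O debug entries in this excerpt are the runtime's side of ordinary CRI RuntimeService calls (Version, ImageFsInfo, ListContainers) issued by the kubelet and by crictl against the crio socket. A sketch of the same two RuntimeService calls using the k8s.io/cri-api gRPC client; the socket path and an unfiltered request matching the "No filters were applied" entries are assumptions, and this is not the code that produced the log:

```go
// cri_list.go: call /runtime.v1.RuntimeService/Version and ListContainers
// directly against CRI-O, as seen in the debug log above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket; adjust if the runtime endpoint differs.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same request the log shows as /runtime.v1.RuntimeService/Version.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Unfiltered ListContainers, matching the "No filters were applied" entries.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s attempt=%d state=%s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}
```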
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.453925023Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=70520fb4-1b52-4592-acd1-fd064b29200f name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.455614501Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:36cad30ca97d8fe4bf9874a7fee12528dfa08094b07de876ba9d2fd93999d58e,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-f8cxz,Uid:c53b85d0-fb09-4f4a-994b-650454a591e9,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713572777203455742,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:17:34.744748214Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20793aefea29bb8e40db2a4ce691cdd3630bf764651d365aed83ed198fe8e024,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-371738,Uid:a68c3bb304e506c6468b1d2cd5dcafae,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1713572761994010317,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68c3bb304e506c6468b1d2cd5dcafae,},Annotations:map[string]string{kubernetes.io/config.hash: a68c3bb304e506c6468b1d2cd5dcafae,kubernetes.io/config.seen: 2024-04-20T00:25:44.511964144Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6176436eca7ffb5fff720d942de5cf0c751e7751f942a46f8b3e9c39211da722,Metadata:&PodSandboxMetadata{Name:kindnet-s87k2,Uid:0820561f-f794-4ac5-8ce2-ae0cb4310c3e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713572743903028370,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string
{kubernetes.io/config.seen: 2024-04-20T00:15:07.358616915Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0ffa8680c564a324c1dade6f45f502434ec574f667d3abb7075c5524977129a3,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hc82,Uid:279d40d8-eb21-476c-ba36-bc7592777126,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713572743604017237,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:15:10.320890399Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e08629c20763929cb9013c6a0c063dd3d2ef275020516b2b9618b2e44aaca3e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jvvpr,Uid:104d5328-1f6a-4747-8e26-9a98e38dc1cc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713
572743526406242,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:15:10.311523063Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1d7b89d3-7cff-4258-8215-819971fa1b81,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713572743517067065,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{kubectl.kubernetes.io/last-appli
ed-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-20T00:15:10.322687696Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-371738,Uid:b49388f5cf8c9385067a8ba08572fa8a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1713572743513198042,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.217:8443,kubernetes.io/config.hash: b49388f5cf8c9385067a8ba08572fa8a,kubernetes.io/config.seen: 2024-04-20T00:14:55.955354138Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3fae8202a1068a4fafc249c58f659b6878754ca5319a2becf15f0a93fff5631f,Metadata:&PodSandboxMetadata{Name:etcd-ha-371738,Uid:a7ef9202f47a99f44c4ee1b49d3476fe,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713572743502977319,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d347
6fe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.217:2379,kubernetes.io/config.hash: a7ef9202f47a99f44c4ee1b49d3476fe,kubernetes.io/config.seen: 2024-04-20T00:14:55.955348033Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e56329660a79e2c3c8e44ab2dfd633e9f0e3186ad73a4941fd477d411d87249a,Metadata:&PodSandboxMetadata{Name:kube-proxy-zw62l,Uid:dad72bfc-65c2-4007-9d5c-682ddf48c44d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713572743500928402,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:15:07.358690088Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:077e956a7dccc6b6b9c
af01533ba20b013a217e51a01b45d743b560615453526,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-371738,Uid:4bf0f7783323c0e2283af9616002946f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713572743462033859,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4bf0f7783323c0e2283af9616002946f,kubernetes.io/config.seen: 2024-04-20T00:14:55.955356095Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-371738,Uid:76604d8bd3050c15d950e4295eb30cc6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713572743411969224,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 76604d8bd3050c15d950e4295eb30cc6,kubernetes.io/config.seen: 2024-04-20T00:14:55.955355248Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a7ec7ade955ff3362448ed7381df73019003e38d57dc42f24b3e4dffda16cff2,Metadata:&PodSandboxMetadata{Name:kindnet-s87k2,Uid:0820561f-f794-4ac5-8ce2-ae0cb4310c3e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1713572741675318788,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04
-20T00:15:07.358616915Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-f8cxz,Uid:c53b85d0-fb09-4f4a-994b-650454a591e9,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713572255080184231,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:17:34.744748214Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hc82,Uid:279d40d8-eb21-476c-ba36-bc7592777126,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713572112154188867,Labels:map[string]string{io.k
ubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:15:10.320890399Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jvvpr,Uid:104d5328-1f6a-4747-8e26-9a98e38dc1cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713572112121009459,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:15:10.311523063Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&PodSandboxMetadata{Name:kube-proxy-zw62l,Uid:dad72bfc-65c2-4007-9d5c-682ddf48c44d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713572108574832662,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T00:15:07.358690088Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-371738,Uid:4bf0f7783323c0e2283af9616002946f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713572086735595166,Labels:map[string]string{component: kube-scheduler,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4bf0f7783323c0e2283af9616002946f,kubernetes.io/config.seen: 2024-04-20T00:14:46.061944667Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&PodSandboxMetadata{Name:etcd-ha-371738,Uid:a7ef9202f47a99f44c4ee1b49d3476fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713572086719428119,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.217:2379,kubernetes.io/config.hash: a7ef9202
f47a99f44c4ee1b49d3476fe,kubernetes.io/config.seen: 2024-04-20T00:14:46.061938307Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=70520fb4-1b52-4592-acd1-fd064b29200f name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.458456173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71d211f8-a0e4-4e4e-98f1-c09365a90431 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.458546357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71d211f8-a0e4-4e4e-98f1-c09365a90431 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.459907179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac855ed56c2e46b910c6f0e29306cd74994e0b52011d6e705d82d492d9434235,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713572843021809392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d,PodSandboxId:6176436eca7ffb5fff720d942de5cf0c751e7751f942a46f8b3e9c39211da722,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713572787014729451,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572784013589195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9a7bfac3a904c015a4a564bcd8fa8210619f2e14534023a7f93fe8f1c138,PodSandboxId:36cad30ca97d8fe4bf9874a7fee12528dfa08094b07de876ba9d2fd93999d58e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572777433824081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572776618832806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eebe566cbfae552db6d06e3ce878cd578cace4ac3b8b5be71bf4bd9ff6666a,PodSandboxId:20793aefea29bb8e40db2a4ce691cdd3630bf764651d365aed83ed198fe8e024,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572762105875303,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68c3bb304e506c6468b1d2cd5dcafae,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d,PodSandboxId:e56329660a79e2c3c8e44ab2dfd633e9f0e3186ad73a4941fd477d411d87249a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572744236067312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713572744356481192,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988e0e16c
e68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626,PodSandboxId:0ffa8680c564a324c1dade6f45f502434ec574f667d3abb7075c5524977129a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744843997907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a,PodSandboxId:0e08629c20763929cb9013c6a0c063dd3d2ef275020516b2b9618b2e44aaca3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744831575113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80,PodSandboxId:3fae8202a1068a4fafc249c58f659b6878754ca5319a2becf15f0a93fff5631f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572744185281385,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f,PodSandboxId:077e956a7dccc6b6b9caf01533ba20b013a217e51a01b45d743b560615453526,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572744205398446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af96
16002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713572744006298165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotation
s:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713572743991218482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37,PodSandboxId:a7ec7ade955ff3362448ed7381df73019003e38d57dc42f24b3e4dffda16cff2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713572742041253686,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernet
es.container.hash: dd367de8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713572256398294941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes
.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112401908790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112310427901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431f
ceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713572108700751072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691
a75a899,State:CONTAINER_EXITED,CreatedAt:1713572087052357246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:17
13572086953871804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71d211f8-a0e4-4e4e-98f1-c09365a90431 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.517380095Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70b363d8-37cf-4997-94f9-f0026c290c29 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.517459974Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70b363d8-37cf-4997-94f9-f0026c290c29 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.518843557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d47a1f1c-f3f9-4603-94f0-5c74e60176f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.519656336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713572890519627224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d47a1f1c-f3f9-4603-94f0-5c74e60176f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.520648233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3343792-dad9-4645-8dd2-45fd8d70d409 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.520709002Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3343792-dad9-4645-8dd2-45fd8d70d409 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:28:10 ha-371738 crio[3972]: time="2024-04-20 00:28:10.521289620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac855ed56c2e46b910c6f0e29306cd74994e0b52011d6e705d82d492d9434235,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713572843021809392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d,PodSandboxId:6176436eca7ffb5fff720d942de5cf0c751e7751f942a46f8b3e9c39211da722,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713572787014729451,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572784013589195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9a7bfac3a904c015a4a564bcd8fa8210619f2e14534023a7f93fe8f1c138,PodSandboxId:36cad30ca97d8fe4bf9874a7fee12528dfa08094b07de876ba9d2fd93999d58e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572777433824081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572776618832806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eebe566cbfae552db6d06e3ce878cd578cace4ac3b8b5be71bf4bd9ff6666a,PodSandboxId:20793aefea29bb8e40db2a4ce691cdd3630bf764651d365aed83ed198fe8e024,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572762105875303,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68c3bb304e506c6468b1d2cd5dcafae,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d,PodSandboxId:e56329660a79e2c3c8e44ab2dfd633e9f0e3186ad73a4941fd477d411d87249a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572744236067312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713572744356481192,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988e0e16c
e68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626,PodSandboxId:0ffa8680c564a324c1dade6f45f502434ec574f667d3abb7075c5524977129a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744843997907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a,PodSandboxId:0e08629c20763929cb9013c6a0c063dd3d2ef275020516b2b9618b2e44aaca3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744831575113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80,PodSandboxId:3fae8202a1068a4fafc249c58f659b6878754ca5319a2becf15f0a93fff5631f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572744185281385,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f,PodSandboxId:077e956a7dccc6b6b9caf01533ba20b013a217e51a01b45d743b560615453526,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572744205398446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af96
16002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713572744006298165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotation
s:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713572743991218482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37,PodSandboxId:a7ec7ade955ff3362448ed7381df73019003e38d57dc42f24b3e4dffda16cff2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713572742041253686,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernet
es.container.hash: dd367de8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713572256398294941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes
.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112401908790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112310427901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431f
ceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713572108700751072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691
a75a899,State:CONTAINER_EXITED,CreatedAt:1713572087052357246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:17
13572086953871804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3343792-dad9-4645-8dd2-45fd8d70d409 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ac855ed56c2e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      47 seconds ago       Running             storage-provisioner       5                   1b488473663a9       storage-provisioner
	6336035c08e95       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   6176436eca7ff       kindnet-s87k2
	919081b91e5bb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   2                   9ab77018e92d9       kube-controller-manager-ha-371738
	853b9a7bfac3a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   36cad30ca97d8       busybox-fc5497c4f-f8cxz
	c9ef31e00ee2c       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            3                   8b198b23f7183       kube-apiserver-ha-371738
	d3eebe566cbfa       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   20793aefea29b       kube-vip-ha-371738
	988e0e16ce68f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0ffa8680c564a       coredns-7db6d8ff4d-9hc82
	8657d9bf44d96       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0e08629c20763       coredns-7db6d8ff4d-jvvpr
	9744a7e64b6df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   1b488473663a9       storage-provisioner
	c79aa579b0cae       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      2 minutes ago        Running             kube-proxy                1                   e56329660a79e       kube-proxy-zw62l
	b501a33161b99       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      2 minutes ago        Running             kube-scheduler            1                   077e956a7dccc       kube-scheduler-ha-371738
	14e36bfb114f2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   3fae8202a1068       etcd-ha-371738
	97b8c9a163319       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Exited              kube-apiserver            2                   8b198b23f7183       kube-apiserver-ha-371738
	323c8a3e2bb2e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Exited              kube-controller-manager   1                   9ab77018e92d9       kube-controller-manager-ha-371738
	7503e9373d138       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   a7ec7ade955ff       kindnet-s87k2
	ee362bb57c39b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   2952502d79ed7       busybox-fc5497c4f-f8cxz
	0895fff8b18b0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   96b6f46faf798       coredns-7db6d8ff4d-9hc82
	a8223d8428849       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   6951735c94141       coredns-7db6d8ff4d-jvvpr
	484faebf3e657       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago       Exited              kube-proxy                0                   78d8eb3f68b71       kube-proxy-zw62l
	c7bfd34cee24c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   6b52fa6b93c1b       etcd-ha-371738
	c9112b9048168       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      13 minutes ago       Exited              kube-scheduler            0                   6c0d855406f87       kube-scheduler-ha-371738
	
	
	==> coredns [0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf] <==
	[INFO] 10.244.0.4:60826 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142804s
	[INFO] 10.244.1.2:55654 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142714s
	[INFO] 10.244.1.2:34889 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117318s
	[INFO] 10.244.1.2:45674 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142204s
	[INFO] 10.244.1.2:43577 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088578s
	[INFO] 10.244.1.2:36740 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123852s
	[INFO] 10.244.1.2:57454 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168492s
	[INFO] 10.244.2.2:49398 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205465s
	[INFO] 10.244.2.2:48930 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221231s
	[INFO] 10.244.2.2:42052 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108139s
	[INFO] 10.244.0.4:40360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213257s
	[INFO] 10.244.0.4:54447 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081534s
	[INFO] 10.244.1.2:40715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185061s
	[INFO] 10.244.1.2:45537 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165941s
	[INFO] 10.244.1.2:38158 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132179s
	[INFO] 10.244.2.2:42970 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000371127s
	[INFO] 10.244.2.2:50230 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172364s
	[INFO] 10.244.0.4:51459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058901s
	[INFO] 10.244.0.4:59988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131476s
	[INFO] 10.244.1.2:56359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140553s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44488->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1521942851]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 00:25:57.193) (total time: 10165ms):
	Trace[1521942851]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44486->10.96.0.1:443: read: connection reset by peer 10164ms (00:26:07.358)
	Trace[1521942851]: [10.165002909s] [10.165002909s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44488->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44486->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[591503224]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 00:26:17.833) (total time: 10002ms):
	Trace[591503224]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:26:27.835)
	Trace[591503224]: [10.002384233s] [10.002384233s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [988e0e16ce68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626] <==
	[INFO] plugin/kubernetes: Trace[207359994]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 00:25:50.093) (total time: 10001ms):
	Trace[207359994]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:26:00.095)
	Trace[207359994]: [10.001948631s] [10.001948631s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1794704336]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 00:25:54.166) (total time: 10001ms):
	Trace[1794704336]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:26:04.168)
	Trace[1794704336]: [10.001569909s] [10.001569909s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:34696->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:34696->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34682->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34682->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c] <==
	[INFO] 10.244.2.2:59691 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134121s
	[INFO] 10.244.0.4:54126 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001623235s
	[INFO] 10.244.0.4:42647 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000295581s
	[INFO] 10.244.0.4:47843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001168377s
	[INFO] 10.244.0.4:59380 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132829s
	[INFO] 10.244.0.4:59464 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032892s
	[INFO] 10.244.0.4:52319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063642s
	[INFO] 10.244.1.2:41188 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001744808s
	[INFO] 10.244.1.2:56595 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001214481s
	[INFO] 10.244.2.2:57639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180873s
	[INFO] 10.244.0.4:57748 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177324s
	[INFO] 10.244.0.4:49496 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076032s
	[INFO] 10.244.1.2:36655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131976s
	[INFO] 10.244.2.2:37462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221492s
	[INFO] 10.244.2.2:58605 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186595s
	[INFO] 10.244.0.4:34556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191452s
	[INFO] 10.244.0.4:53073 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000345299s
	[INFO] 10.244.1.2:38241 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000181093s
	[INFO] 10.244.1.2:59304 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166312s
	[INFO] 10.244.1.2:50151 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139637s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1838&timeout=6m43s&timeoutSeconds=403&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1838&timeout=5m9s&timeoutSeconds=309&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1838&timeout=5m17s&timeoutSeconds=317&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-371738
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T00_14_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:14:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:28:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:26:33 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:26:33 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:26:33 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:26:33 +0000   Sat, 20 Apr 2024 00:15:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-371738
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74609fff13e94a48ba74bd0fc50a4818
	  System UUID:                74609fff-13e9-4a48-ba74-bd0fc50a4818
	  Boot ID:                    2adb72ca-aae0-452d-9d86-779c19923b8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f8cxz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-9hc82             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-jvvpr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-371738                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-s87k2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-371738             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-371738    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-zw62l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-371738             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-371738                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 101s   kube-proxy       
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-371738 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-371738 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-371738 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-371738 status is now: NodeReady
	  Normal   RegisteredNode           11m    node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Warning  ContainerGCFailed        3m15s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           89s    node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal   RegisteredNode           88s    node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal   RegisteredNode           35s    node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	
	
	Name:               ha-371738-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_16_02_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:15:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:28:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:27:22 +0000   Sat, 20 Apr 2024 00:26:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:27:22 +0000   Sat, 20 Apr 2024 00:26:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:27:22 +0000   Sat, 20 Apr 2024 00:26:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:27:22 +0000   Sat, 20 Apr 2024 00:26:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-371738-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e23e7a13fe24abd8986bea706ca80e3
	  System UUID:                4e23e7a1-3fe2-4abd-8986-bea706ca80e3
	  Boot ID:                    bc9fbe65-d0b4-4673-b35c-703e2f7e1f06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j7g5h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-371738-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-ggw7f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-371738-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-371738-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-59wls                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-371738-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-371738-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-371738-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-371738-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-371738-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  NodeNotReady             8m46s                node-controller  Node ha-371738-m02 status is now: NodeNotReady
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node ha-371738-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node ha-371738-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node ha-371738-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                  node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           88s                  node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           35s                  node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	
	
	Name:               ha-371738-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_17_14_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:17:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:28:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:27:40 +0000   Sat, 20 Apr 2024 00:17:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:27:40 +0000   Sat, 20 Apr 2024 00:17:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:27:40 +0000   Sat, 20 Apr 2024 00:17:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:27:40 +0000   Sat, 20 Apr 2024 00:17:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-371738-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0917a4381b82461ea5ea3ad6015706e2
	  System UUID:                0917a438-1b82-461e-a5ea-3ad6015706e2
	  Boot ID:                    c34a3ac8-acf5-49cc-a101-9ba629262803
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bqndp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-371738-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-ph4sb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-371738-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-371738-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-924z9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-371738-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-371738-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 44s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-371738-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-371738-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-371738-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	  Normal   RegisteredNode           89s                node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	  Normal   RegisteredNode           88s                node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node ha-371738-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node ha-371738-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node ha-371738-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 62s                kubelet          Node ha-371738-m03 has been rebooted, boot id: c34a3ac8-acf5-49cc-a101-9ba629262803
	  Normal   RegisteredNode           35s                node-controller  Node ha-371738-m03 event: Registered Node ha-371738-m03 in Controller
	
	
	Name:               ha-371738-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_18_15_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:18:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:28:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:28:02 +0000   Sat, 20 Apr 2024 00:28:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:28:02 +0000   Sat, 20 Apr 2024 00:28:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:28:02 +0000   Sat, 20 Apr 2024 00:28:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:28:02 +0000   Sat, 20 Apr 2024 00:28:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-371738-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 236dccfc477f4e3db2ca80077dc2160d
	  System UUID:                236dccfc-477f-4e3d-b2ca-80077dc2160d
	  Boot ID:                    68dd9360-905d-4c49-b8f5-3ad8f692d4cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zsn9n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m57s
	  kube-system                 kube-proxy-7fn2b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m52s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m57s (x4 over 9m58s)  kubelet          Node ha-371738-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m57s (x4 over 9m58s)  kubelet          Node ha-371738-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m57s (x4 over 9m58s)  kubelet          Node ha-371738-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m56s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   RegisteredNode           9m55s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   RegisteredNode           9m53s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   NodeReady                9m49s                  kubelet          Node ha-371738-m04 status is now: NodeReady
	  Normal   RegisteredNode           89s                    node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   RegisteredNode           88s                    node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   NodeNotReady             49s                    node-controller  Node ha-371738-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           35s                    node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)        kubelet          Node ha-371738-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)        kubelet          Node ha-371738-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)        kubelet          Node ha-371738-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)        kubelet          Node ha-371738-m04 has been rebooted, boot id: 68dd9360-905d-4c49-b8f5-3ad8f692d4cb
	  Normal   NodeReady                9s (x2 over 9s)        kubelet          Node ha-371738-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.470452] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056643] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066813] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.173842] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.129751] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.277871] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.788058] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.061136] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.194311] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +1.186377] kauditd_printk_skb: 57 callbacks suppressed
	[  +8.916528] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.094324] kauditd_printk_skb: 40 callbacks suppressed
	[Apr20 00:15] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.572775] kauditd_printk_skb: 72 callbacks suppressed
	[Apr20 00:22] kauditd_printk_skb: 1 callbacks suppressed
	[Apr20 00:25] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +0.218476] systemd-fstab-generator[3830]: Ignoring "noauto" option for root device
	[  +0.227584] systemd-fstab-generator[3864]: Ignoring "noauto" option for root device
	[  +0.183646] systemd-fstab-generator[3886]: Ignoring "noauto" option for root device
	[  +0.319469] systemd-fstab-generator[3936]: Ignoring "noauto" option for root device
	[  +2.059096] systemd-fstab-generator[4602]: Ignoring "noauto" option for root device
	[  +3.382447] kauditd_printk_skb: 231 callbacks suppressed
	[Apr20 00:26] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80] <==
	{"level":"warn","ts":"2024-04-20T00:27:03.533713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:27:03.539662Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T00:27:03.622254Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"119fe9e65aa8addc","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:03.622332Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"119fe9e65aa8addc","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:06.073236Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"119fe9e65aa8addc","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:06.073506Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"119fe9e65aa8addc","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:07.625191Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"119fe9e65aa8addc","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:07.625326Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"119fe9e65aa8addc","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:11.074485Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"119fe9e65aa8addc","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:11.074588Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"119fe9e65aa8addc","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:11.628249Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"119fe9e65aa8addc","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:11.628324Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"119fe9e65aa8addc","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:15.630434Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"119fe9e65aa8addc","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:15.63052Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"119fe9e65aa8addc","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:16.0747Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"119fe9e65aa8addc","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-20T00:27:16.074731Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"119fe9e65aa8addc","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-20T00:27:18.017665Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:27:18.020243Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:27:18.03757Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"119fe9e65aa8addc","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-20T00:27:18.037646Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:27:18.039987Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:27:18.045431Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"119fe9e65aa8addc","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-20T00:27:18.04554Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"warn","ts":"2024-04-20T00:28:05.966985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.981442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-59wls\" ","response":"range_response_count:1 size:4592"}
	{"level":"info","ts":"2024-04-20T00:28:05.967396Z","caller":"traceutil/trace.go:171","msg":"trace[2058211705] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-59wls; range_end:; response_count:1; response_revision:2462; }","duration":"101.449447ms","start":"2024-04-20T00:28:05.865909Z","end":"2024-04-20T00:28:05.967359Z","steps":["trace[2058211705] 'range keys from in-memory index tree'  (duration: 99.410969ms)"],"step_count":1}
	
	
	==> etcd [c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0] <==
	2024/04/20 00:24:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-20T00:24:10.169977Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:24:09.294363Z","time spent":"875.609388ms","remote":"127.0.0.1:46986","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	2024/04/20 00:24:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-20T00:24:10.170024Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:24:09.184491Z","time spent":"985.528793ms","remote":"127.0.0.1:46854","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 "}
	2024/04/20 00:24:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-20T00:24:10.170068Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:24:09.281831Z","time spent":"888.233349ms","remote":"127.0.0.1:46588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	2024/04/20 00:24:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-20T00:24:10.191324Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-20T00:24:10.191932Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.191977Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192053Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192294Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.19237Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192407Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192417Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192423Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192436Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192484Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192523Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192576Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.19266Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192671Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.196056Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-04-20T00:24:10.196281Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-04-20T00:24:10.196317Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-371738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> kernel <==
	 00:28:11 up 13 min,  0 users,  load average: 1.21, 0.78, 0.41
	Linux ha-371738 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d] <==
	I0420 00:27:39.691233       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:27:49.703727       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:27:49.703868       1 main.go:227] handling current node
	I0420 00:27:49.703892       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:27:49.703911       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:27:49.704215       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0420 00:27:49.704261       1 main.go:250] Node ha-371738-m03 has CIDR [10.244.2.0/24] 
	I0420 00:27:49.704350       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:27:49.704380       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:27:59.710744       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:27:59.710811       1 main.go:227] handling current node
	I0420 00:27:59.710835       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:27:59.710841       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:27:59.710985       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0420 00:27:59.711033       1 main.go:250] Node ha-371738-m03 has CIDR [10.244.2.0/24] 
	I0420 00:27:59.711228       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:27:59.711270       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:28:09.721035       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:28:09.721335       1 main.go:227] handling current node
	I0420 00:28:09.721401       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:28:09.721434       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:28:09.721606       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0420 00:28:09.721752       1 main.go:250] Node ha-371738-m03 has CIDR [10.244.2.0/24] 
	I0420 00:28:09.721851       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:28:09.721879       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37] <==
	I0420 00:25:42.498821       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0420 00:25:42.498869       1 main.go:107] hostIP = 192.168.39.217
	podIP = 192.168.39.217
	I0420 00:25:42.499040       1 main.go:116] setting mtu 1500 for CNI 
	I0420 00:25:42.499058       1 main.go:146] kindnetd IP family: "ipv4"
	I0420 00:25:42.499080       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	
	==> kube-apiserver [97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748] <==
	I0420 00:25:44.967586       1 options.go:221] external host was not specified, using 192.168.39.217
	I0420 00:25:44.978354       1 server.go:148] Version: v1.30.0
	I0420 00:25:44.978421       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:25:46.259612       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0420 00:25:46.262747       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 00:25:46.266481       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0420 00:25:46.266576       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0420 00:25:46.266751       1 instance.go:299] Using reconciler: lease
	W0420 00:26:06.257471       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0420 00:26:06.257471       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0420 00:26:06.267793       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58] <==
	I0420 00:26:29.459391       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0420 00:26:29.459431       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0420 00:26:29.494913       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 00:26:29.496394       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 00:26:29.502060       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 00:26:29.502193       1 policy_source.go:224] refreshing policies
	I0420 00:26:29.502848       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 00:26:29.506158       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0420 00:26:29.521605       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 00:26:29.566924       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 00:26:29.567000       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 00:26:29.567011       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 00:26:29.569719       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 00:26:29.576271       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 00:26:29.576375       1 aggregator.go:165] initial CRD sync complete...
	I0420 00:26:29.576425       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 00:26:29.576450       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 00:26:29.576478       1 cache.go:39] Caches are synced for autoregister controller
	W0420 00:26:29.759980       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.253]
	I0420 00:26:29.761834       1 controller.go:615] quota admission added evaluator for: endpoints
	I0420 00:26:29.776442       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0420 00:26:29.786723       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0420 00:26:30.376843       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0420 00:26:30.941671       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.253]
	W0420 00:26:50.916417       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.48]
	
	
	==> kube-controller-manager [323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb] <==
	I0420 00:25:46.397204       1 serving.go:380] Generated self-signed cert in-memory
	I0420 00:25:47.085496       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0420 00:25:47.085547       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:25:47.087799       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0420 00:25:47.089314       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0420 00:25:47.090991       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0420 00:25:47.091744       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0420 00:26:07.274932       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.217:8443/healthz\": dial tcp 192.168.39.217:8443: connect: connection refused"
	
	
	==> kube-controller-manager [919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9] <==
	I0420 00:26:42.118365       1 shared_informer.go:320] Caches are synced for job
	I0420 00:26:42.118505       1 shared_informer.go:320] Caches are synced for attach detach
	I0420 00:26:42.120463       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0420 00:26:42.138419       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 00:26:42.152763       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 00:26:42.167910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.215167ms"
	I0420 00:26:42.169481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125.894µs"
	I0420 00:26:42.176570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.000637ms"
	I0420 00:26:42.192524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.610082ms"
	I0420 00:26:42.192701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.912µs"
	I0420 00:26:42.294329       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-371738"
	I0420 00:26:42.294433       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-371738-m02"
	I0420 00:26:42.294494       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-371738-m03"
	I0420 00:26:42.294572       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-371738-m04"
	I0420 00:26:42.297239       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0420 00:26:42.562752       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 00:26:42.562772       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0420 00:26:42.567778       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 00:26:52.164454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.018119ms"
	I0420 00:26:52.165274       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="366.285µs"
	I0420 00:27:10.399496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.511937ms"
	I0420 00:27:10.399623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.242µs"
	I0420 00:27:27.771873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.928223ms"
	I0420 00:27:27.773353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.332µs"
	I0420 00:28:02.600798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-371738-m04"
	
	
	==> kube-proxy [484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c] <==
	E0420 00:23:00.287313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:03.357409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:03.357476       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:03.357414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:03.357508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:03.357667       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:03.357747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:09.950893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:09.951014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:09.951052       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:09.951175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:09.951301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:09.951349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:19.167078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:19.167264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:22.238483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:22.238802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:25.309826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:25.309996       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:40.670248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:40.670430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:40.670552       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:40.670590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:46.817557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:46.817621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d] <==
	E0420 00:26:11.199246       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-371738\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0420 00:26:29.630022       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-371738\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0420 00:26:29.630201       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0420 00:26:29.692931       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:26:29.693051       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:26:29.693076       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:26:29.697864       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:26:29.698290       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:26:29.698378       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:26:29.700198       1 config.go:192] "Starting service config controller"
	I0420 00:26:29.700255       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:26:29.700287       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:26:29.700291       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:26:29.701288       1 config.go:319] "Starting node config controller"
	I0420 00:26:29.701328       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0420 00:26:32.702047       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0420 00:26:32.702418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:26:32.702615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:26:32.702771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:26:32.703073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:26:32.703309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:26:32.703928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0420 00:26:34.101223       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:26:34.201351       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:26:34.201387       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f] <==
	I0420 00:26:27.047464       1 trace.go:236] Trace[43179159]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Apr-2024 00:26:17.046) (total time: 10000ms):
	Trace[43179159]: ---"Objects listed" error:Get "https://192.168.39.217:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:26:27.047)
	Trace[43179159]: [10.000871787s] [10.000871787s] END
	E0420 00:26:27.047486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.217:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0420 00:26:29.411683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 00:26:29.414213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 00:26:29.414407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 00:26:29.414450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 00:26:29.414555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:26:29.414594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 00:26:29.414690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:26:29.414721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:26:29.414800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 00:26:29.414830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 00:26:29.414890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:26:29.414916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:26:29.414994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:26:29.415023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:26:29.415204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:26:29.415248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:26:29.415330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:26:29.415371       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:26:29.415434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:26:29.415463       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0420 00:26:29.581268       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9] <==
	W0420 00:24:07.927764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 00:24:07.927871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 00:24:08.270381       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:24:08.270539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:24:08.364355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:24:08.364441       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:24:08.811724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:24:08.811918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:24:09.040321       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 00:24:09.040381       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:24:09.172716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 00:24:09.173228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 00:24:09.344711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 00:24:09.344741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 00:24:09.404484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 00:24:09.404540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 00:24:09.445000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:24:09.445144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:24:09.570769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 00:24:09.570828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 00:24:09.707809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:24:09.707862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0420 00:24:09.737375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:24:09.737434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:24:10.126797       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 20 00:26:29 ha-371738 kubelet[1375]: I0420 00:26:29.632917    1375 status_manager.go:853] "Failed to get status for pod" podUID="a68c3bb304e506c6468b1d2cd5dcafae" pod="kube-system/kube-vip-ha-371738" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-371738\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 20 00:26:29 ha-371738 kubelet[1375]: W0420 00:26:29.633946    1375 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 20 00:26:29 ha-371738 kubelet[1375]: E0420 00:26:29.634210    1375 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 20 00:26:32 ha-371738 kubelet[1375]: I0420 00:26:32.701981    1375 status_manager.go:853] "Failed to get status for pod" podUID="4bf0f7783323c0e2283af9616002946f" pod="kube-system/kube-scheduler-ha-371738" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-371738\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 20 00:26:39 ha-371738 kubelet[1375]: I0420 00:26:39.999325    1375 scope.go:117] "RemoveContainer" containerID="9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94"
	Apr 20 00:26:40 ha-371738 kubelet[1375]: E0420 00:26:40.003544    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1d7b89d3-7cff-4258-8215-819971fa1b81)\"" pod="kube-system/storage-provisioner" podUID="1d7b89d3-7cff-4258-8215-819971fa1b81"
	Apr 20 00:26:47 ha-371738 kubelet[1375]: I0420 00:26:47.607374    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-f8cxz" podStartSLOduration=552.648829026 podStartE2EDuration="9m13.607330497s" podCreationTimestamp="2024-04-20 00:17:34 +0000 UTC" firstStartedPulling="2024-04-20 00:17:35.423742098 +0000 UTC m=+159.561587053" lastFinishedPulling="2024-04-20 00:17:36.382243565 +0000 UTC m=+160.520088524" observedRunningTime="2024-04-20 00:17:36.80525605 +0000 UTC m=+160.943101021" watchObservedRunningTime="2024-04-20 00:26:47.607330497 +0000 UTC m=+711.745175477"
	Apr 20 00:26:54 ha-371738 kubelet[1375]: I0420 00:26:54.998487    1375 scope.go:117] "RemoveContainer" containerID="9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94"
	Apr 20 00:26:54 ha-371738 kubelet[1375]: E0420 00:26:54.999043    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1d7b89d3-7cff-4258-8215-819971fa1b81)\"" pod="kube-system/storage-provisioner" podUID="1d7b89d3-7cff-4258-8215-819971fa1b81"
	Apr 20 00:26:56 ha-371738 kubelet[1375]: E0420 00:26:56.015721    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:26:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:26:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:26:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:26:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:27:07 ha-371738 kubelet[1375]: I0420 00:27:07.998679    1375 scope.go:117] "RemoveContainer" containerID="9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94"
	Apr 20 00:27:07 ha-371738 kubelet[1375]: E0420 00:27:07.999047    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1d7b89d3-7cff-4258-8215-819971fa1b81)\"" pod="kube-system/storage-provisioner" podUID="1d7b89d3-7cff-4258-8215-819971fa1b81"
	Apr 20 00:27:22 ha-371738 kubelet[1375]: I0420 00:27:22.997574    1375 scope.go:117] "RemoveContainer" containerID="9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94"
	Apr 20 00:27:31 ha-371738 kubelet[1375]: I0420 00:27:31.997991    1375 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-371738" podUID="8d162382-25bb-4393-8c45-a8487b571605"
	Apr 20 00:27:32 ha-371738 kubelet[1375]: I0420 00:27:32.023654    1375 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-371738"
	Apr 20 00:27:36 ha-371738 kubelet[1375]: I0420 00:27:36.021222    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-371738" podStartSLOduration=4.021079157 podStartE2EDuration="4.021079157s" podCreationTimestamp="2024-04-20 00:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 00:27:36.01991427 +0000 UTC m=+760.157759244" watchObservedRunningTime="2024-04-20 00:27:36.021079157 +0000 UTC m=+760.158924140"
	Apr 20 00:27:56 ha-371738 kubelet[1375]: E0420 00:27:56.022588    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:27:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:27:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:27:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:27:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 00:28:10.008982  102195 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18703-76456/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-371738 -n ha-371738
helpers_test.go:261: (dbg) Run:  kubectl --context ha-371738 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.99s)
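Note: the kubelet errors quoted above all reduce to one symptom: the control-plane VIP 192.168.39.254:8443 was unreachable from the node ("dial tcp 192.168.39.254:8443: connect: no route to host"). The short Go sketch below is a generic, illustrative reachability probe for that address; the probe function and its output are assumptions added for illustration, not minikube code.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe attempts a plain TCP connection to the control-plane VIP taken
	// from the kubelet log above. A "no route to host" error here points at
	// the VIP/network layer rather than at the apiserver process itself.
	func probe(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			return err
		}
		defer conn.Close()
		fmt.Println("reachable:", addr)
		return nil
	}

	func main() {
		if err := probe("192.168.39.254:8443"); err != nil {
			fmt.Println("probe failed:", err)
		}
	}

Run from a node, this distinguishes a missing route (no route to host, as in the log) from an apiserver that is merely down (connection refused).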

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 stop -v=7 --alsologtostderr
E0420 00:30:27.814929   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 stop -v=7 --alsologtostderr: exit status 82 (2m0.502543051s)

                                                
                                                
-- stdout --
	* Stopping node "ha-371738-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:28:30.338920  102599 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:28:30.339193  102599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:28:30.339203  102599 out.go:304] Setting ErrFile to fd 2...
	I0420 00:28:30.339207  102599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:28:30.339411  102599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:28:30.339630  102599 out.go:298] Setting JSON to false
	I0420 00:28:30.339705  102599 mustload.go:65] Loading cluster: ha-371738
	I0420 00:28:30.340066  102599 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:28:30.340161  102599 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:28:30.340334  102599 mustload.go:65] Loading cluster: ha-371738
	I0420 00:28:30.340461  102599 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:28:30.340488  102599 stop.go:39] StopHost: ha-371738-m04
	I0420 00:28:30.340867  102599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:28:30.340926  102599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:28:30.355709  102599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0420 00:28:30.356193  102599 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:28:30.356786  102599 main.go:141] libmachine: Using API Version  1
	I0420 00:28:30.356811  102599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:28:30.357142  102599 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:28:30.359728  102599 out.go:177] * Stopping node "ha-371738-m04"  ...
	I0420 00:28:30.361072  102599 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0420 00:28:30.361112  102599 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:28:30.361370  102599 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0420 00:28:30.361418  102599 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:28:30.364293  102599 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:28:30.364705  102599 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:27:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:28:30.364734  102599 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:28:30.364847  102599 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:28:30.365029  102599 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:28:30.365193  102599 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:28:30.365353  102599 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	I0420 00:28:30.456858  102599 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0420 00:28:30.512289  102599 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0420 00:28:30.567062  102599 main.go:141] libmachine: Stopping "ha-371738-m04"...
	I0420 00:28:30.567098  102599 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:28:30.568633  102599 main.go:141] libmachine: (ha-371738-m04) Calling .Stop
	I0420 00:28:30.572268  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 0/120
	I0420 00:28:31.574547  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 1/120
	I0420 00:28:32.575874  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 2/120
	I0420 00:28:33.577601  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 3/120
	I0420 00:28:34.579959  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 4/120
	I0420 00:28:35.582125  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 5/120
	I0420 00:28:36.584659  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 6/120
	I0420 00:28:37.586031  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 7/120
	I0420 00:28:38.587765  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 8/120
	I0420 00:28:39.589051  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 9/120
	I0420 00:28:40.591135  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 10/120
	I0420 00:28:41.593631  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 11/120
	I0420 00:28:42.595264  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 12/120
	I0420 00:28:43.596816  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 13/120
	I0420 00:28:44.598428  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 14/120
	I0420 00:28:45.600366  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 15/120
	I0420 00:28:46.601666  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 16/120
	I0420 00:28:47.603221  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 17/120
	I0420 00:28:48.605050  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 18/120
	I0420 00:28:49.606650  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 19/120
	I0420 00:28:50.608845  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 20/120
	I0420 00:28:51.610290  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 21/120
	I0420 00:28:52.611682  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 22/120
	I0420 00:28:53.612972  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 23/120
	I0420 00:28:54.614405  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 24/120
	I0420 00:28:55.616275  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 25/120
	I0420 00:28:56.617611  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 26/120
	I0420 00:28:57.619708  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 27/120
	I0420 00:28:58.620970  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 28/120
	I0420 00:28:59.622207  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 29/120
	I0420 00:29:00.624342  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 30/120
	I0420 00:29:01.625698  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 31/120
	I0420 00:29:02.627770  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 32/120
	I0420 00:29:03.629060  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 33/120
	I0420 00:29:04.630324  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 34/120
	I0420 00:29:05.632407  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 35/120
	I0420 00:29:06.634566  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 36/120
	I0420 00:29:07.636804  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 37/120
	I0420 00:29:08.638212  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 38/120
	I0420 00:29:09.639520  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 39/120
	I0420 00:29:10.641455  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 40/120
	I0420 00:29:11.643978  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 41/120
	I0420 00:29:12.645364  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 42/120
	I0420 00:29:13.646887  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 43/120
	I0420 00:29:14.648462  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 44/120
	I0420 00:29:15.651003  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 45/120
	I0420 00:29:16.652616  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 46/120
	I0420 00:29:17.654165  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 47/120
	I0420 00:29:18.655663  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 48/120
	I0420 00:29:19.657024  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 49/120
	I0420 00:29:20.659122  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 50/120
	I0420 00:29:21.660585  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 51/120
	I0420 00:29:22.662007  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 52/120
	I0420 00:29:23.663539  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 53/120
	I0420 00:29:24.665245  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 54/120
	I0420 00:29:25.667038  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 55/120
	I0420 00:29:26.668363  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 56/120
	I0420 00:29:27.670528  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 57/120
	I0420 00:29:28.671887  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 58/120
	I0420 00:29:29.674304  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 59/120
	I0420 00:29:30.676378  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 60/120
	I0420 00:29:31.677945  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 61/120
	I0420 00:29:32.679780  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 62/120
	I0420 00:29:33.681208  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 63/120
	I0420 00:29:34.682661  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 64/120
	I0420 00:29:35.683954  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 65/120
	I0420 00:29:36.686303  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 66/120
	I0420 00:29:37.687565  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 67/120
	I0420 00:29:38.689112  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 68/120
	I0420 00:29:39.690532  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 69/120
	I0420 00:29:40.692422  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 70/120
	I0420 00:29:41.694591  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 71/120
	I0420 00:29:42.695796  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 72/120
	I0420 00:29:43.697657  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 73/120
	I0420 00:29:44.699825  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 74/120
	I0420 00:29:45.701506  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 75/120
	I0420 00:29:46.703694  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 76/120
	I0420 00:29:47.705102  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 77/120
	I0420 00:29:48.706487  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 78/120
	I0420 00:29:49.707969  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 79/120
	I0420 00:29:50.710266  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 80/120
	I0420 00:29:51.711843  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 81/120
	I0420 00:29:52.713180  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 82/120
	I0420 00:29:53.714887  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 83/120
	I0420 00:29:54.716588  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 84/120
	I0420 00:29:55.718512  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 85/120
	I0420 00:29:56.719890  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 86/120
	I0420 00:29:57.721468  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 87/120
	I0420 00:29:58.723101  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 88/120
	I0420 00:29:59.725265  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 89/120
	I0420 00:30:00.727378  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 90/120
	I0420 00:30:01.728673  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 91/120
	I0420 00:30:02.729961  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 92/120
	I0420 00:30:03.732283  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 93/120
	I0420 00:30:04.733705  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 94/120
	I0420 00:30:05.735605  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 95/120
	I0420 00:30:06.737077  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 96/120
	I0420 00:30:07.738509  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 97/120
	I0420 00:30:08.739852  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 98/120
	I0420 00:30:09.741454  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 99/120
	I0420 00:30:10.743524  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 100/120
	I0420 00:30:11.745078  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 101/120
	I0420 00:30:12.746463  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 102/120
	I0420 00:30:13.748247  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 103/120
	I0420 00:30:14.749880  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 104/120
	I0420 00:30:15.751582  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 105/120
	I0420 00:30:16.753137  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 106/120
	I0420 00:30:17.755174  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 107/120
	I0420 00:30:18.756451  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 108/120
	I0420 00:30:19.757767  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 109/120
	I0420 00:30:20.759788  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 110/120
	I0420 00:30:21.761171  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 111/120
	I0420 00:30:22.763508  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 112/120
	I0420 00:30:23.764957  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 113/120
	I0420 00:30:24.766377  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 114/120
	I0420 00:30:25.768291  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 115/120
	I0420 00:30:26.770190  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 116/120
	I0420 00:30:27.771513  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 117/120
	I0420 00:30:28.772932  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 118/120
	I0420 00:30:29.774253  102599 main.go:141] libmachine: (ha-371738-m04) Waiting for machine to stop 119/120
	I0420 00:30:30.774768  102599 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0420 00:30:30.774829  102599 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0420 00:30:30.776807  102599 out.go:177] 
	W0420 00:30:30.778166  102599 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0420 00:30:30.778183  102599 out.go:239] * 
	* 
	W0420 00:30:30.781870  102599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 00:30:30.783293  102599 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-371738 stop -v=7 --alsologtostderr": exit status 82
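For context, the 120 "Waiting for machine to stop N/120" lines above are a one-second polling loop with a two-minute budget; when the VM never leaves the "Running" state, minikube gives up and the command exits with status 82 (GUEST_STOP_TIMEOUT). The sketch below is a minimal, hypothetical reconstruction of that pattern in Go; getState and stopWithTimeout are illustrative names, not minikube's actual API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState stands in for the driver call the log shows (".GetState");
	// here it always reports "Running" to reproduce the failure mode.
	func getState() (string, error) {
		return "Running", nil
	}

	// stopWithTimeout polls once per second for up to maxWait attempts and
	// gives up with an error a caller could map to GUEST_STOP_TIMEOUT.
	func stopWithTimeout(maxWait int) error {
		for i := 0; i < maxWait; i++ {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := stopWithTimeout(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}

In the failing run above, every poll reported "Running", so the loop exhausted all 120 attempts and returned the error shown in the stderr block.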
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr: exit status 3 (19.017114327s)

                                                
                                                
-- stdout --
	ha-371738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371738-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:30:30.845703  103057 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:30:30.845946  103057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:30:30.845954  103057 out.go:304] Setting ErrFile to fd 2...
	I0420 00:30:30.845959  103057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:30:30.846128  103057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:30:30.846281  103057 out.go:298] Setting JSON to false
	I0420 00:30:30.846308  103057 mustload.go:65] Loading cluster: ha-371738
	I0420 00:30:30.846412  103057 notify.go:220] Checking for updates...
	I0420 00:30:30.846665  103057 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:30:30.846681  103057 status.go:255] checking status of ha-371738 ...
	I0420 00:30:30.847305  103057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:30:30.847380  103057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:30:30.867935  103057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42773
	I0420 00:30:30.868482  103057 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:30:30.868961  103057 main.go:141] libmachine: Using API Version  1
	I0420 00:30:30.868983  103057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:30:30.869399  103057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:30:30.869675  103057 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:30:30.871186  103057 status.go:330] ha-371738 host status = "Running" (err=<nil>)
	I0420 00:30:30.871213  103057 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:30:30.871490  103057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:30:30.871525  103057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:30:30.885678  103057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0420 00:30:30.886016  103057 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:30:30.886513  103057 main.go:141] libmachine: Using API Version  1
	I0420 00:30:30.886535  103057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:30:30.886834  103057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:30:30.887058  103057 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:30:30.890284  103057 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:30:30.890804  103057 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:30:30.890832  103057 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:30:30.890999  103057 host.go:66] Checking if "ha-371738" exists ...
	I0420 00:30:30.891267  103057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:30:30.891298  103057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:30:30.905520  103057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0420 00:30:30.905863  103057 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:30:30.906265  103057 main.go:141] libmachine: Using API Version  1
	I0420 00:30:30.906284  103057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:30:30.906581  103057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:30:30.906763  103057 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:30:30.907007  103057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:30:30.907036  103057 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:30:30.909858  103057 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:30:30.910294  103057 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:30:30.910318  103057 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:30:30.910451  103057 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:30:30.910614  103057 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:30:30.910768  103057 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:30:30.910915  103057 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:30:31.000338  103057 ssh_runner.go:195] Run: systemctl --version
	I0420 00:30:31.010370  103057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:30:31.031648  103057 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:30:31.031675  103057 api_server.go:166] Checking apiserver status ...
	I0420 00:30:31.031705  103057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:30:31.054873  103057 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5158/cgroup
	W0420 00:30:31.067407  103057 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:30:31.067459  103057 ssh_runner.go:195] Run: ls
	I0420 00:30:31.072873  103057 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:30:31.080821  103057 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:30:31.080855  103057 status.go:422] ha-371738 apiserver status = Running (err=<nil>)
	I0420 00:30:31.080868  103057 status.go:257] ha-371738 status: &{Name:ha-371738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:30:31.080888  103057 status.go:255] checking status of ha-371738-m02 ...
	I0420 00:30:31.081283  103057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:30:31.081340  103057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:30:31.096045  103057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
	I0420 00:30:31.096494  103057 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:30:31.096992  103057 main.go:141] libmachine: Using API Version  1
	I0420 00:30:31.097021  103057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:30:31.097381  103057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:30:31.097607  103057 main.go:141] libmachine: (ha-371738-m02) Calling .GetState
	I0420 00:30:31.099114  103057 status.go:330] ha-371738-m02 host status = "Running" (err=<nil>)
	I0420 00:30:31.099132  103057 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:30:31.099442  103057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:30:31.099484  103057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:30:31.113838  103057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
	I0420 00:30:31.114224  103057 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:30:31.114702  103057 main.go:141] libmachine: Using API Version  1
	I0420 00:30:31.114722  103057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:30:31.115122  103057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:30:31.115318  103057 main.go:141] libmachine: (ha-371738-m02) Calling .GetIP
	I0420 00:30:31.117845  103057 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:30:31.118302  103057 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:25:58 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:30:31.118338  103057 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:30:31.118469  103057 host.go:66] Checking if "ha-371738-m02" exists ...
	I0420 00:30:31.118777  103057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:30:31.118813  103057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:30:31.134522  103057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0420 00:30:31.134889  103057 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:30:31.135407  103057 main.go:141] libmachine: Using API Version  1
	I0420 00:30:31.135430  103057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:30:31.135780  103057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:30:31.135976  103057 main.go:141] libmachine: (ha-371738-m02) Calling .DriverName
	I0420 00:30:31.136217  103057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:30:31.136237  103057 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHHostname
	I0420 00:30:31.139013  103057 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:30:31.139453  103057 main.go:141] libmachine: (ha-371738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:c8", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:25:58 +0000 UTC Type:0 Mac:52:54:00:3b:ab:c8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-371738-m02 Clientid:01:52:54:00:3b:ab:c8}
	I0420 00:30:31.139479  103057 main.go:141] libmachine: (ha-371738-m02) DBG | domain ha-371738-m02 has defined IP address 192.168.39.48 and MAC address 52:54:00:3b:ab:c8 in network mk-ha-371738
	I0420 00:30:31.139618  103057 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHPort
	I0420 00:30:31.139825  103057 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHKeyPath
	I0420 00:30:31.139974  103057 main.go:141] libmachine: (ha-371738-m02) Calling .GetSSHUsername
	I0420 00:30:31.140101  103057 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m02/id_rsa Username:docker}
	I0420 00:30:31.228896  103057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:30:31.251362  103057 kubeconfig.go:125] found "ha-371738" server: "https://192.168.39.254:8443"
	I0420 00:30:31.251396  103057 api_server.go:166] Checking apiserver status ...
	I0420 00:30:31.251436  103057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:30:31.268447  103057 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	W0420 00:30:31.280076  103057 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:30:31.280120  103057 ssh_runner.go:195] Run: ls
	I0420 00:30:31.285689  103057 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0420 00:30:31.295490  103057 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0420 00:30:31.295509  103057 status.go:422] ha-371738-m02 apiserver status = Running (err=<nil>)
	I0420 00:30:31.295517  103057 status.go:257] ha-371738-m02 status: &{Name:ha-371738-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:30:31.295531  103057 status.go:255] checking status of ha-371738-m04 ...
	I0420 00:30:31.295833  103057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:30:31.295872  103057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:30:31.312462  103057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35325
	I0420 00:30:31.312873  103057 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:30:31.313387  103057 main.go:141] libmachine: Using API Version  1
	I0420 00:30:31.313406  103057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:30:31.313682  103057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:30:31.313864  103057 main.go:141] libmachine: (ha-371738-m04) Calling .GetState
	I0420 00:30:31.315385  103057 status.go:330] ha-371738-m04 host status = "Running" (err=<nil>)
	I0420 00:30:31.315399  103057 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:30:31.315753  103057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:30:31.315794  103057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:30:31.330944  103057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I0420 00:30:31.331342  103057 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:30:31.331784  103057 main.go:141] libmachine: Using API Version  1
	I0420 00:30:31.331802  103057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:30:31.332116  103057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:30:31.332362  103057 main.go:141] libmachine: (ha-371738-m04) Calling .GetIP
	I0420 00:30:31.335130  103057 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:30:31.335605  103057 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:27:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:30:31.335637  103057 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:30:31.335799  103057 host.go:66] Checking if "ha-371738-m04" exists ...
	I0420 00:30:31.336107  103057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:30:31.336140  103057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:30:31.350173  103057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0420 00:30:31.350553  103057 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:30:31.350946  103057 main.go:141] libmachine: Using API Version  1
	I0420 00:30:31.350971  103057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:30:31.351279  103057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:30:31.351444  103057 main.go:141] libmachine: (ha-371738-m04) Calling .DriverName
	I0420 00:30:31.351672  103057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:30:31.351690  103057 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHHostname
	I0420 00:30:31.354354  103057 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:30:31.354779  103057 main.go:141] libmachine: (ha-371738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:32:07", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:27:57 +0000 UTC Type:0 Mac:52:54:00:00:32:07 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-371738-m04 Clientid:01:52:54:00:00:32:07}
	I0420 00:30:31.354808  103057 main.go:141] libmachine: (ha-371738-m04) DBG | domain ha-371738-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:00:32:07 in network mk-ha-371738
	I0420 00:30:31.354969  103057 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHPort
	I0420 00:30:31.355138  103057 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHKeyPath
	I0420 00:30:31.355242  103057 main.go:141] libmachine: (ha-371738-m04) Calling .GetSSHUsername
	I0420 00:30:31.355363  103057 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738-m04/id_rsa Username:docker}
	W0420 00:30:49.801592  103057 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0420 00:30:49.801714  103057 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0420 00:30:49.801734  103057 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0420 00:30:49.801745  103057 status.go:257] ha-371738-m04 status: &{Name:ha-371738-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0420 00:30:49.801772  103057 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr" : exit status 3
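The status stderr above shows the per-node checks: an SSH session for disk and kubelet state, then an HTTPS GET against https://192.168.39.254:8443/healthz that expects a 200 "ok" (the m04 node fails earlier, at the SSH dial). A minimal, self-contained sketch of that health probe follows; certificate verification is skipped only to keep the example standalone, and depending on the cluster's anonymous-auth settings the endpoint may answer 401/403 rather than 200.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz issues the same kind of probe the status log records:
	// GET <apiserver>/healthz and require HTTP 200 with body "ok".
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}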
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-371738 -n ha-371738
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-371738 logs -n 25: (1.877474332s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-371738 ssh -n ha-371738-m02 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04:/home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m04 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp testdata/cp-test.txt                                                | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3122242891/001/cp-test_ha-371738-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738:/home/docker/cp-test_ha-371738-m04_ha-371738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738 sudo cat                                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m02:/home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m02 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m03:/home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n                                                                 | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | ha-371738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-371738 ssh -n ha-371738-m03 sudo cat                                          | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC | 20 Apr 24 00:18 UTC |
	|         | /home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-371738 node stop m02 -v=7                                                     | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-371738 node start m02 -v=7                                                    | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-371738 -v=7                                                           | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-371738 -v=7                                                                | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-371738 --wait=true -v=7                                                    | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:24 UTC | 20 Apr 24 00:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-371738                                                                | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:28 UTC |                     |
	| node    | ha-371738 node delete m03 -v=7                                                   | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:28 UTC | 20 Apr 24 00:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-371738 stop -v=7                                                              | ha-371738 | jenkins | v1.33.0 | 20 Apr 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:24:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:24:09.181716  100866 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:24:09.181839  100866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:24:09.181848  100866 out.go:304] Setting ErrFile to fd 2...
	I0420 00:24:09.181853  100866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:24:09.182059  100866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:24:09.182691  100866 out.go:298] Setting JSON to false
	I0420 00:24:09.183586  100866 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":11196,"bootTime":1713561453,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 00:24:09.183653  100866 start.go:139] virtualization: kvm guest
	I0420 00:24:09.186051  100866 out.go:177] * [ha-371738] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 00:24:09.187914  100866 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:24:09.187886  100866 notify.go:220] Checking for updates...
	I0420 00:24:09.189340  100866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:24:09.190540  100866 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:24:09.191762  100866 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:24:09.192986  100866 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 00:24:09.194367  100866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:24:09.195987  100866 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:24:09.196096  100866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:24:09.196519  100866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:24:09.196571  100866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:24:09.211709  100866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
	I0420 00:24:09.212095  100866 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:24:09.212637  100866 main.go:141] libmachine: Using API Version  1
	I0420 00:24:09.212660  100866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:24:09.213016  100866 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:24:09.213243  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:24:09.247629  100866 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 00:24:09.248959  100866 start.go:297] selected driver: kvm2
	I0420 00:24:09.248974  100866 start.go:901] validating driver "kvm2" against &{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default A
PIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false head
lamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:24:09.249096  100866 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:24:09.249431  100866 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:24:09.249506  100866 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 00:24:09.263932  100866 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 00:24:09.264832  100866 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:24:09.264907  100866 cni.go:84] Creating CNI manager for ""
	I0420 00:24:09.264930  100866 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0420 00:24:09.265001  100866 start.go:340] cluster config:
	{Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:24:09.265160  100866 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:24:09.267560  100866 out.go:177] * Starting "ha-371738" primary control-plane node in "ha-371738" cluster
	I0420 00:24:09.269067  100866 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:24:09.269101  100866 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 00:24:09.269112  100866 cache.go:56] Caching tarball of preloaded images
	I0420 00:24:09.269211  100866 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:24:09.269223  100866 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:24:09.269399  100866 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/config.json ...
	I0420 00:24:09.269606  100866 start.go:360] acquireMachinesLock for ha-371738: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:24:09.269671  100866 start.go:364] duration metric: took 45.599µs to acquireMachinesLock for "ha-371738"
	I0420 00:24:09.269692  100866 start.go:96] Skipping create...Using existing machine configuration
	I0420 00:24:09.269702  100866 fix.go:54] fixHost starting: 
	I0420 00:24:09.269954  100866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:24:09.269992  100866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:24:09.283515  100866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0420 00:24:09.283948  100866 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:24:09.284473  100866 main.go:141] libmachine: Using API Version  1
	I0420 00:24:09.284499  100866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:24:09.284783  100866 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:24:09.284944  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:24:09.285097  100866 main.go:141] libmachine: (ha-371738) Calling .GetState
	I0420 00:24:09.286700  100866 fix.go:112] recreateIfNeeded on ha-371738: state=Running err=<nil>
	W0420 00:24:09.286718  100866 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 00:24:09.289384  100866 out.go:177] * Updating the running kvm2 "ha-371738" VM ...
	I0420 00:24:09.290616  100866 machine.go:94] provisionDockerMachine start ...
	I0420 00:24:09.290661  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:24:09.290899  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.293923  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.294440  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.294469  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.294672  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.294933  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.295130  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.295291  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.295487  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:24:09.295659  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:24:09.295669  100866 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 00:24:09.415198  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738
	
	I0420 00:24:09.415234  100866 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:24:09.415503  100866 buildroot.go:166] provisioning hostname "ha-371738"
	I0420 00:24:09.415534  100866 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:24:09.415751  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.418451  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.418843  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.418879  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.419059  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.419228  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.419361  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.419450  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.419554  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:24:09.419719  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:24:09.419732  100866 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-371738 && echo "ha-371738" | sudo tee /etc/hostname
	I0420 00:24:09.554268  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-371738
	
	I0420 00:24:09.554304  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.557338  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.557763  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.557787  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.557981  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.558195  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.558370  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.558552  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.558757  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:24:09.558933  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:24:09.558950  100866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-371738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-371738/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-371738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:24:09.678695  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:24:09.678735  100866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:24:09.678763  100866 buildroot.go:174] setting up certificates
	I0420 00:24:09.678774  100866 provision.go:84] configureAuth start
	I0420 00:24:09.678783  100866 main.go:141] libmachine: (ha-371738) Calling .GetMachineName
	I0420 00:24:09.679073  100866 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:24:09.681933  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.682332  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.682362  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.682496  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.684847  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.685202  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.685218  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.685367  100866 provision.go:143] copyHostCerts
	I0420 00:24:09.685392  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:24:09.685434  100866 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:24:09.685446  100866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:24:09.685532  100866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:24:09.685628  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:24:09.685646  100866 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:24:09.685650  100866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:24:09.685679  100866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:24:09.685799  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:24:09.685821  100866 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:24:09.685826  100866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:24:09.685852  100866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:24:09.685904  100866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.ha-371738 san=[127.0.0.1 192.168.39.217 ha-371738 localhost minikube]
	I0420 00:24:09.808106  100866 provision.go:177] copyRemoteCerts
	I0420 00:24:09.808171  100866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:24:09.808195  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.810886  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.811262  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.811284  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.811508  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.811725  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.811873  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.812010  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:24:09.905459  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:24:09.905522  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:24:09.936017  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:24:09.936102  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0420 00:24:09.964960  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:24:09.965014  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 00:24:09.993992  100866 provision.go:87] duration metric: took 315.201574ms to configureAuth
	I0420 00:24:09.994021  100866 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:24:09.994263  100866 config.go:182] Loaded profile config "ha-371738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:24:09.994345  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:24:09.997101  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.997564  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:24:09.997593  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:24:09.997806  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:24:09.998000  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.998178  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:24:09.998328  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:24:09.998485  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:24:09.998673  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:24:09.998690  100866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:25:40.874739  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:25:40.874774  100866 machine.go:97] duration metric: took 1m31.584144057s to provisionDockerMachine
	I0420 00:25:40.874789  100866 start.go:293] postStartSetup for "ha-371738" (driver="kvm2")
	I0420 00:25:40.874799  100866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:25:40.874816  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:40.875198  100866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:25:40.875235  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:40.878657  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:40.879190  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:40.879218  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:40.879373  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:40.879593  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:40.879866  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:40.880030  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:25:40.969290  100866 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:25:40.974265  100866 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:25:40.974295  100866 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:25:40.974367  100866 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:25:40.974461  100866 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:25:40.974476  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:25:40.974587  100866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:25:40.985507  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:25:41.013076  100866 start.go:296] duration metric: took 138.273677ms for postStartSetup
	I0420 00:25:41.013130  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.013503  100866 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0420 00:25:41.013529  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:41.016493  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.016922  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.016951  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.017096  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:41.017347  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.017495  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:41.017621  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	W0420 00:25:41.103730  100866 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0420 00:25:41.103755  100866 fix.go:56] duration metric: took 1m31.834054018s for fixHost
	I0420 00:25:41.103776  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:41.106241  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.106734  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.106764  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.106894  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:41.107083  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.107254  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.107455  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:41.107603  100866 main.go:141] libmachine: Using SSH client type: native
	I0420 00:25:41.107814  100866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0420 00:25:41.107826  100866 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:25:41.222515  100866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713572741.184353502
	
	I0420 00:25:41.222547  100866 fix.go:216] guest clock: 1713572741.184353502
	I0420 00:25:41.222558  100866 fix.go:229] Guest: 2024-04-20 00:25:41.184353502 +0000 UTC Remote: 2024-04-20 00:25:41.103762097 +0000 UTC m=+91.973048737 (delta=80.591405ms)
	I0420 00:25:41.222588  100866 fix.go:200] guest clock delta is within tolerance: 80.591405ms
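date +%!s(MISSING).%!N(MISSING) above is again the log escaping its own format verbs; the command actually sent to the guest is presumably the plain epoch-with-nanoseconds query, and the reported delta is just the difference between the guest reading and the host's wall clock at the same moment. A minimal sketch of that arithmetic, using the two timestamps from the log:

	date +%s.%N                                              # guest side: 1713572741.184353502 above
	echo "1713572741.184353502 - 1713572741.103762097" | bc  # ≈ 0.0806 s, the 80.591405ms delta reported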
	I0420 00:25:41.222597  100866 start.go:83] releasing machines lock for "ha-371738", held for 1m31.952913361s
	I0420 00:25:41.222626  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.222989  100866 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:25:41.225645  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.226091  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.226125  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.226272  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.226784  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.226979  100866 main.go:141] libmachine: (ha-371738) Calling .DriverName
	I0420 00:25:41.227071  100866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:25:41.227117  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:41.227217  100866 ssh_runner.go:195] Run: cat /version.json
	I0420 00:25:41.227248  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHHostname
	I0420 00:25:41.229815  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.230132  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.230172  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.230196  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.230306  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:41.230510  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.230566  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:41.230591  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:41.230676  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:41.230750  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHPort
	I0420 00:25:41.230829  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:25:41.230891  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHKeyPath
	I0420 00:25:41.231043  100866 main.go:141] libmachine: (ha-371738) Calling .GetSSHUsername
	I0420 00:25:41.231187  100866 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/ha-371738/id_rsa Username:docker}
	I0420 00:25:41.316002  100866 ssh_runner.go:195] Run: systemctl --version
	I0420 00:25:41.343209  100866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:25:41.524166  100866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 00:25:41.537921  100866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:25:41.537981  100866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:25:41.547984  100866 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0420 00:25:41.548010  100866 start.go:494] detecting cgroup driver to use...
	I0420 00:25:41.548078  100866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:25:41.566253  100866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:25:41.580587  100866 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:25:41.580641  100866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:25:41.594871  100866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:25:41.609428  100866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:25:41.816703  100866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:25:41.992664  100866 docker.go:233] disabling docker service ...
	I0420 00:25:41.992750  100866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:25:42.014406  100866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:25:42.043040  100866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:25:42.227819  100866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:25:42.416473  100866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:25:42.438563  100866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:25:42.461068  100866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:25:42.461138  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.472488  100866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:25:42.472556  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.483609  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.494702  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.505772  100866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:25:42.517260  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.528580  100866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.540491  100866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:25:42.551997  100866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:25:42.562715  100866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:25:42.573048  100866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:25:42.737013  100866 ssh_runner.go:195] Run: sudo systemctl restart crio
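Taken together, the sed calls above pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and CRI-O restart. A hypothetical spot-check of the effective values on the node (crio config is the same subcommand the log runs a few lines below; grep is standard):

	sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup'
	sudo grep -A2 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf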
	I0420 00:25:43.138669  100866 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:25:43.138738  100866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:25:43.145229  100866 start.go:562] Will wait 60s for crictl version
	I0420 00:25:43.145292  100866 ssh_runner.go:195] Run: which crictl
	I0420 00:25:43.150054  100866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:25:43.194670  100866 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 00:25:43.194763  100866 ssh_runner.go:195] Run: crio --version
	I0420 00:25:43.226380  100866 ssh_runner.go:195] Run: crio --version
	I0420 00:25:43.261092  100866 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:25:43.262436  100866 main.go:141] libmachine: (ha-371738) Calling .GetIP
	I0420 00:25:43.264949  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:43.265302  100866 main.go:141] libmachine: (ha-371738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:22:29", ip: ""} in network mk-ha-371738: {Iface:virbr1 ExpiryTime:2024-04-20 01:14:26 +0000 UTC Type:0 Mac:52:54:00:a2:22:29 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-371738 Clientid:01:52:54:00:a2:22:29}
	I0420 00:25:43.265341  100866 main.go:141] libmachine: (ha-371738) DBG | domain ha-371738 has defined IP address 192.168.39.217 and MAC address 52:54:00:a2:22:29 in network mk-ha-371738
	I0420 00:25:43.265542  100866 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:25:43.270790  100866 kubeadm.go:877] updating cluster {Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:1
92.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 00:25:43.270924  100866 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:25:43.270973  100866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:25:43.320649  100866 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:25:43.320672  100866 crio.go:433] Images already preloaded, skipping extraction
	I0420 00:25:43.320722  100866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:25:43.436669  100866 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:25:43.436699  100866 cache_images.go:84] Images are preloaded, skipping loading
	I0420 00:25:43.436712  100866 kubeadm.go:928] updating node { 192.168.39.217 8443 v1.30.0 crio true true} ...
	I0420 00:25:43.436849  100866 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-371738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
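The [Unit]/[Service] fragment above is the kubelet drop-in minikube renders for this node; it is written further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 309-byte scp line). A hypothetical check that the running kubelet picked up those flags, using standard systemctl/pgrep invocations not taken from this log:

	systemctl cat kubelet | grep -A3 ExecStart   # unit plus the 10-kubeadm.conf drop-in
	pgrep -a kubelet                             # command line should include --node-ip=192.168.39.217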
	I0420 00:25:43.436939  100866 ssh_runner.go:195] Run: crio config
	I0420 00:25:43.643509  100866 cni.go:84] Creating CNI manager for ""
	I0420 00:25:43.643532  100866 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0420 00:25:43.643545  100866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 00:25:43.643572  100866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-371738 NodeName:ha-371738 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 00:25:43.643767  100866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-371738"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
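The kubeadm config printed above is assembled in memory and, further down, copied to /var/tmp/minikube/kubeadm.yaml.new on the node (the 2153-byte scp line). A hedged way to see whether it differs from what the control plane last ran with, assuming minikube's usual /var/tmp/minikube/kubeadm.yaml location for the previous copy (only the .new path appears in this log):

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true
	# an empty diff means the rendered control-plane config is unchanged across this restart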
	
	I0420 00:25:43.643793  100866 kube-vip.go:111] generating kube-vip config ...
	I0420 00:25:43.643860  100866 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0420 00:25:43.688493  100866 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 00:25:43.688677  100866 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
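The pod spec above is the kube-vip static manifest: it leader-elects across the control-plane nodes and holds the API-server VIP 192.168.39.254 on eth0, and it is written below to /etc/kubernetes/manifests/kube-vip.yaml (the 1346-byte scp line). Some hypothetical checks on a control-plane node, using only names and addresses that appear in this log plus standard ip/crictl usage:

	sudo cat /etc/kubernetes/manifests/kube-vip.yaml   # the manifest written a few lines below
	ip addr show eth0 | grep 192.168.39.254            # VIP is bound only on the current leader
	sudo crictl ps --name kube-vip                     # the static-pod container should be running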
	I0420 00:25:43.688755  100866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:25:43.798966  100866 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 00:25:43.799047  100866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0420 00:25:43.991880  100866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0420 00:25:44.158652  100866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:25:44.448421  100866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0420 00:25:44.490941  100866 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0420 00:25:44.550137  100866 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0420 00:25:44.563870  100866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:25:44.913689  100866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:25:45.070624  100866 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738 for IP: 192.168.39.217
	I0420 00:25:45.070649  100866 certs.go:194] generating shared ca certs ...
	I0420 00:25:45.070670  100866 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:25:45.070836  100866 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:25:45.070888  100866 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:25:45.070902  100866 certs.go:256] generating profile certs ...
	I0420 00:25:45.071126  100866 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/client.key
	I0420 00:25:45.071171  100866 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.6d7bd836
	I0420 00:25:45.071195  100866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.6d7bd836 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.48 192.168.39.253 192.168.39.254]
	I0420 00:25:45.153131  100866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.6d7bd836 ...
	I0420 00:25:45.153160  100866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.6d7bd836: {Name:mkbc37b952bd6a1c868dc8556da9c440274c7ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:25:45.153333  100866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.6d7bd836 ...
	I0420 00:25:45.153344  100866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.6d7bd836: {Name:mk5ee993a93be74423a5fcd6d4233c0e060bec55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:25:45.153416  100866 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt.6d7bd836 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt
	I0420 00:25:45.153567  100866 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key.6d7bd836 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key
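The apiserver serving certificate generated above carries the control-plane VIP (192.168.39.254) and every node IP as subject alternative names. A rough sketch of how such a certificate can be produced with Go's crypto/x509, assuming a self-signed template stands in for the real minikubeCA signer (illustrative only):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a fresh key; a real setup would sign with the existing CA key instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs taken from the "Generating cert ... with IP's" line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.217"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed here; parent would be the CA certificate in practice.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}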
	I0420 00:25:45.153697  100866 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key
	I0420 00:25:45.153714  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:25:45.153728  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:25:45.153741  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:25:45.153754  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:25:45.153766  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:25:45.153778  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:25:45.153789  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:25:45.153801  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:25:45.153851  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:25:45.153878  100866 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:25:45.153889  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:25:45.153911  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:25:45.153932  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:25:45.153953  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:25:45.153989  100866 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:25:45.154017  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:25:45.154031  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:25:45.154044  100866 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:25:45.154672  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:25:45.251999  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:25:45.297557  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:25:45.334860  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:25:45.382519  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0420 00:25:45.417717  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:25:45.452263  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:25:45.485507  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/ha-371738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 00:25:45.516847  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:25:45.546745  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:25:45.578765  100866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:25:45.614949  100866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 00:25:45.638686  100866 ssh_runner.go:195] Run: openssl version
	I0420 00:25:45.645956  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:25:45.662510  100866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:25:45.667960  100866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:25:45.668017  100866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:25:45.676567  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 00:25:45.692548  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:25:45.707809  100866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:25:45.713864  100866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:25:45.713915  100866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:25:45.720730  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 00:25:45.734551  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:25:45.748291  100866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:25:45.753579  100866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:25:45.753629  100866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:25:45.763413  100866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 00:25:45.777863  100866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:25:45.783623  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 00:25:45.791284  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 00:25:45.799812  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 00:25:45.807849  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 00:25:45.814342  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 00:25:45.820697  100866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
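Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 24 hours. The same check can be done directly with Go's crypto/x509; a minimal sketch, assuming the PEM file path is passed as the first command-line argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Read and parse the certificate from the given PEM file.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror `openssl x509 -checkend 86400`: non-zero exit if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 86400 seconds")
}

Running it as `go run checkcert.go /var/lib/minikube/certs/apiserver.crt` would exit non-zero when the certificate is about to expire, matching openssl's behaviour.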
	I0420 00:25:45.827377  100866 kubeadm.go:391] StartCluster: {Name:ha-371738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-371738 Namespace:default APIServerHAVIP:192.
168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false hel
m-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:25:45.827481  100866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:25:45.827534  100866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:25:45.918370  100866 cri.go:89] found id: "988e0e16ce68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626"
	I0420 00:25:45.918397  100866 cri.go:89] found id: "8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a"
	I0420 00:25:45.918403  100866 cri.go:89] found id: "b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f"
	I0420 00:25:45.918408  100866 cri.go:89] found id: "14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80"
	I0420 00:25:45.918412  100866 cri.go:89] found id: "97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748"
	I0420 00:25:45.918416  100866 cri.go:89] found id: "323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb"
	I0420 00:25:45.918420  100866 cri.go:89] found id: "7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37"
	I0420 00:25:45.918423  100866 cri.go:89] found id: "0aa3be068585a3c4374d1e7e092e2ec838b2b02090d194e30d2c924030aa9509"
	I0420 00:25:45.918427  100866 cri.go:89] found id: "b90a66605161e3e4d5e9517fd1c01ea9e9aa4354ad448006470591f9cb7eb927"
	I0420 00:25:45.918439  100866 cri.go:89] found id: "0f3c494d087c330d86ae9524f1ee4fbc2f2b52dc84c7babc65df9d0767fc394d"
	I0420 00:25:45.918444  100866 cri.go:89] found id: "0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf"
	I0420 00:25:45.918448  100866 cri.go:89] found id: "a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c"
	I0420 00:25:45.918452  100866 cri.go:89] found id: "484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c"
	I0420 00:25:45.918456  100866 cri.go:89] found id: "c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0"
	I0420 00:25:45.918462  100866 cri.go:89] found id: "f163f250149afb625b34cc67c2a85b657a6c38717a194973b7406caf8b71afdb"
	I0420 00:25:45.918466  100866 cri.go:89] found id: "c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9"
	I0420 00:25:45.918473  100866 cri.go:89] found id: ""
	I0420 00:25:45.918532  100866 ssh_runner.go:195] Run: sudo runc list -f json
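The container inventory above comes from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation shown in the log. A small Go wrapper around that same command, assuming crictl is on PATH and sudo is available (illustrative only):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// List all kube-system container IDs known to the CRI runtime, as in the log.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}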
	
	
	==> CRI-O <==
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.484322857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713573050484300818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3118f7ca-065f-4462-bb5a-0040491d3c06 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.484925684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea7662ff-8d39-4fd9-bbaa-6921719e8b92 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.484977087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea7662ff-8d39-4fd9-bbaa-6921719e8b92 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.485536440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac855ed56c2e46b910c6f0e29306cd74994e0b52011d6e705d82d492d9434235,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713572843021809392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d,PodSandboxId:6176436eca7ffb5fff720d942de5cf0c751e7751f942a46f8b3e9c39211da722,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713572787014729451,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572784013589195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9a7bfac3a904c015a4a564bcd8fa8210619f2e14534023a7f93fe8f1c138,PodSandboxId:36cad30ca97d8fe4bf9874a7fee12528dfa08094b07de876ba9d2fd93999d58e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572777433824081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572776618832806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eebe566cbfae552db6d06e3ce878cd578cace4ac3b8b5be71bf4bd9ff6666a,PodSandboxId:20793aefea29bb8e40db2a4ce691cdd3630bf764651d365aed83ed198fe8e024,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572762105875303,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68c3bb304e506c6468b1d2cd5dcafae,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d,PodSandboxId:e56329660a79e2c3c8e44ab2dfd633e9f0e3186ad73a4941fd477d411d87249a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572744236067312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713572744356481192,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988e0e16c
e68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626,PodSandboxId:0ffa8680c564a324c1dade6f45f502434ec574f667d3abb7075c5524977129a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744843997907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a,PodSandboxId:0e08629c20763929cb9013c6a0c063dd3d2ef275020516b2b9618b2e44aaca3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744831575113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80,PodSandboxId:3fae8202a1068a4fafc249c58f659b6878754ca5319a2becf15f0a93fff5631f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572744185281385,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f,PodSandboxId:077e956a7dccc6b6b9caf01533ba20b013a217e51a01b45d743b560615453526,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572744205398446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af96
16002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713572744006298165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotation
s:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713572743991218482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37,PodSandboxId:a7ec7ade955ff3362448ed7381df73019003e38d57dc42f24b3e4dffda16cff2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713572742041253686,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernet
es.container.hash: dd367de8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713572256398294941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes
.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112401908790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112310427901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431f
ceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713572108700751072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691
a75a899,State:CONTAINER_EXITED,CreatedAt:1713572087052357246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:17
13572086953871804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea7662ff-8d39-4fd9-bbaa-6921719e8b92 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.544614821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f513b3d3-eee9-4c0b-9cb5-86321fd1e34d name=/runtime.v1.RuntimeService/Version
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.544759037Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f513b3d3-eee9-4c0b-9cb5-86321fd1e34d name=/runtime.v1.RuntimeService/Version
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.546429067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9117d6b-4f2a-4843-a02d-e49c48dbe7eb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.547075679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713573050547044622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9117d6b-4f2a-4843-a02d-e49c48dbe7eb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.548233362Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bb060c3-e70f-46df-9d4d-e6610e190020 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.548354293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bb060c3-e70f-46df-9d4d-e6610e190020 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.548956159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac855ed56c2e46b910c6f0e29306cd74994e0b52011d6e705d82d492d9434235,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713572843021809392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d,PodSandboxId:6176436eca7ffb5fff720d942de5cf0c751e7751f942a46f8b3e9c39211da722,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713572787014729451,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572784013589195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9a7bfac3a904c015a4a564bcd8fa8210619f2e14534023a7f93fe8f1c138,PodSandboxId:36cad30ca97d8fe4bf9874a7fee12528dfa08094b07de876ba9d2fd93999d58e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572777433824081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572776618832806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eebe566cbfae552db6d06e3ce878cd578cace4ac3b8b5be71bf4bd9ff6666a,PodSandboxId:20793aefea29bb8e40db2a4ce691cdd3630bf764651d365aed83ed198fe8e024,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572762105875303,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68c3bb304e506c6468b1d2cd5dcafae,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d,PodSandboxId:e56329660a79e2c3c8e44ab2dfd633e9f0e3186ad73a4941fd477d411d87249a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572744236067312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713572744356481192,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988e0e16c
e68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626,PodSandboxId:0ffa8680c564a324c1dade6f45f502434ec574f667d3abb7075c5524977129a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744843997907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a,PodSandboxId:0e08629c20763929cb9013c6a0c063dd3d2ef275020516b2b9618b2e44aaca3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744831575113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80,PodSandboxId:3fae8202a1068a4fafc249c58f659b6878754ca5319a2becf15f0a93fff5631f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572744185281385,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f,PodSandboxId:077e956a7dccc6b6b9caf01533ba20b013a217e51a01b45d743b560615453526,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572744205398446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af96
16002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713572744006298165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotation
s:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713572743991218482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37,PodSandboxId:a7ec7ade955ff3362448ed7381df73019003e38d57dc42f24b3e4dffda16cff2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713572742041253686,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernet
es.container.hash: dd367de8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713572256398294941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes
.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112401908790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112310427901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431f
ceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713572108700751072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691
a75a899,State:CONTAINER_EXITED,CreatedAt:1713572087052357246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:17
13572086953871804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2bb060c3-e70f-46df-9d4d-e6610e190020 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.619292049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=344b24e8-9e04-415a-9c49-cec3dbd7d200 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.619366992Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=344b24e8-9e04-415a-9c49-cec3dbd7d200 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.620430929Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42d0fe22-3df9-477a-a85c-339e8d9622ff name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.620888693Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713573050620863779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42d0fe22-3df9-477a-a85c-339e8d9622ff name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.621447098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3978aeab-6255-470d-9664-ead3ae633b4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.621537439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3978aeab-6255-470d-9664-ead3ae633b4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.622060533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac855ed56c2e46b910c6f0e29306cd74994e0b52011d6e705d82d492d9434235,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713572843021809392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d,PodSandboxId:6176436eca7ffb5fff720d942de5cf0c751e7751f942a46f8b3e9c39211da722,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713572787014729451,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572784013589195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9a7bfac3a904c015a4a564bcd8fa8210619f2e14534023a7f93fe8f1c138,PodSandboxId:36cad30ca97d8fe4bf9874a7fee12528dfa08094b07de876ba9d2fd93999d58e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572777433824081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572776618832806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eebe566cbfae552db6d06e3ce878cd578cace4ac3b8b5be71bf4bd9ff6666a,PodSandboxId:20793aefea29bb8e40db2a4ce691cdd3630bf764651d365aed83ed198fe8e024,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572762105875303,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68c3bb304e506c6468b1d2cd5dcafae,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d,PodSandboxId:e56329660a79e2c3c8e44ab2dfd633e9f0e3186ad73a4941fd477d411d87249a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572744236067312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713572744356481192,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988e0e16c
e68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626,PodSandboxId:0ffa8680c564a324c1dade6f45f502434ec574f667d3abb7075c5524977129a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744843997907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a,PodSandboxId:0e08629c20763929cb9013c6a0c063dd3d2ef275020516b2b9618b2e44aaca3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744831575113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80,PodSandboxId:3fae8202a1068a4fafc249c58f659b6878754ca5319a2becf15f0a93fff5631f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572744185281385,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f,PodSandboxId:077e956a7dccc6b6b9caf01533ba20b013a217e51a01b45d743b560615453526,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572744205398446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af96
16002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713572744006298165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotation
s:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713572743991218482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37,PodSandboxId:a7ec7ade955ff3362448ed7381df73019003e38d57dc42f24b3e4dffda16cff2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713572742041253686,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernet
es.container.hash: dd367de8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713572256398294941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes
.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112401908790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112310427901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431f
ceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713572108700751072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691
a75a899,State:CONTAINER_EXITED,CreatedAt:1713572087052357246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:17
13572086953871804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3978aeab-6255-470d-9664-ead3ae633b4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.672827147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf51f9dc-7199-4640-a358-59204783c937 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.672936184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf51f9dc-7199-4640-a358-59204783c937 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.674636593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75923864-1ec1-4b93-92e2-484f61e2c856 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.675028441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713573050675008405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75923864-1ec1-4b93-92e2-484f61e2c856 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.675802371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=712f9d13-750f-4e38-b359-e4929f43a755 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.675890395Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=712f9d13-750f-4e38-b359-e4929f43a755 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:30:50 ha-371738 crio[3972]: time="2024-04-20 00:30:50.676503452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac855ed56c2e46b910c6f0e29306cd74994e0b52011d6e705d82d492d9434235,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713572843021809392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d,PodSandboxId:6176436eca7ffb5fff720d942de5cf0c751e7751f942a46f8b3e9c39211da722,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713572787014729451,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernetes.container.hash: dd367de8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713572784013589195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9a7bfac3a904c015a4a564bcd8fa8210619f2e14534023a7f93fe8f1c138,PodSandboxId:36cad30ca97d8fe4bf9874a7fee12528dfa08094b07de876ba9d2fd93999d58e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713572777433824081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91975a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713572776618832806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotations:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eebe566cbfae552db6d06e3ce878cd578cace4ac3b8b5be71bf4bd9ff6666a,PodSandboxId:20793aefea29bb8e40db2a4ce691cdd3630bf764651d365aed83ed198fe8e024,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713572762105875303,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68c3bb304e506c6468b1d2cd5dcafae,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d,PodSandboxId:e56329660a79e2c3c8e44ab2dfd633e9f0e3186ad73a4941fd477d411d87249a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713572744236067312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94,PodSandboxId:1b488473663a90a6cc14f775612ba568db9924f2e8ec0e9d52049e5b6da10ce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713572744356481192,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7b89d3-7cff-4258-8215-819971fa1b81,},Annotations:map[string]string{io.kubernetes.container.hash: 7fe79245,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988e0e16c
e68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626,PodSandboxId:0ffa8680c564a324c1dade6f45f502434ec574f667d3abb7075c5524977129a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744843997907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a,PodSandboxId:0e08629c20763929cb9013c6a0c063dd3d2ef275020516b2b9618b2e44aaca3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713572744831575113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80,PodSandboxId:3fae8202a1068a4fafc249c58f659b6878754ca5319a2becf15f0a93fff5631f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713572744185281385,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f,PodSandboxId:077e956a7dccc6b6b9caf01533ba20b013a217e51a01b45d743b560615453526,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713572744205398446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af96
16002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748,PodSandboxId:8b198b23f71838e7b14bc8c4e3b718bb1c6d216ffaa5ec219995ea9b4f4c7c7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713572744006298165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b49388f5cf8c9385067a8ba08572fa8a,},Annotation
s:map[string]string{io.kubernetes.container.hash: 929f4a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb,PodSandboxId:9ab77018e92d9ff6ca9244fcd5466f24520859fc0aa3f4bc93eb08f6d0787568,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713572743991218482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76604d8bd3050c15d950e4295eb30cc6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37,PodSandboxId:a7ec7ade955ff3362448ed7381df73019003e38d57dc42f24b3e4dffda16cff2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713572742041253686,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s87k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0820561f-f794-4ac5-8ce2-ae0cb4310c3e,},Annotations:map[string]string{io.kubernet
es.container.hash: dd367de8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee362bb57c39b48e30473ee01be65a12508f89000c04664e9d4cb00eead48881,PodSandboxId:2952502d79ed7046fb6c936e2cdcaac06d274a1af6bb0f72625bb9c7849a53af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713572256398294941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f8cxz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c53b85d0-fb09-4f4a-994b-650454a591e9,},Annotations:map[string]string{io.kubernetes
.container.hash: 91975a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf,PodSandboxId:96b6f46faf7987503503c406f518a352cf828470aaa2857fdc4e9580eee7d3ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112401908790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hc82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279d40d8-eb21-476c-ba36-bc7592777126,},Annotations:map[string]string{io.kubernetes.container.hash: ee84443e,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c,PodSandboxId:6951735c94141fbea313e44ff72fab10529f03b1ba6dc664543c35ed8b0e7c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713572112310427901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-jvvpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 104d5328-1f6a-4747-8e26-9a98e38dc1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 77f4f648,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c,PodSandboxId:78d8eb3f68b710cf8ae3ebc45873b48e07019b5e4d7efd0b56e62a4513be110c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431f
ceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713572108700751072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw62l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad72bfc-65c2-4007-9d5c-682ddf48c44d,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4f593f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0,PodSandboxId:6b52fa6b93c1b7e8f8537088635da6d0cb7b5bb9091002379c8f7b848af01e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691
a75a899,State:CONTAINER_EXITED,CreatedAt:1713572087052357246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7ef9202f47a99f44c4ee1b49d3476fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7b5c549,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9,PodSandboxId:6c0d855406f87897ca0924505087fcfdf3cb0d5eaf2fcde6c237b42f6d3ffd82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:17
13572086953871804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-371738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf0f7783323c0e2283af9616002946f,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=712f9d13-750f-4e38-b359-e4929f43a755 name=/runtime.v1.RuntimeService/ListContainers
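The crio debug entries above are the CRI gRPC calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) that the kubelet and the log collector keep polling over the CRI-O socket; an empty ContainerFilter is why crio reports "No filters were applied, returning full container list". Below is a minimal Go sketch of the same three calls, assuming the default CRI-O socket path /var/run/crio/crio.sock and the k8s.io/cri-api client types; it is an illustration, not part of the test harness.

    // Minimal sketch: issue the CRI calls seen in the crio debug log above
    // (Version, ImageFsInfo, ListContainers) against the CRI-O socket.
    // The socket path is the CRI-O default and may differ on other setups.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// CRI-O listens on a unix socket; gRPC dials it via the unix:// scheme.
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimev1.NewRuntimeServiceClient(conn)
    	img := runtimev1.NewImageServiceClient(conn)

    	// RuntimeService/Version -> "RuntimeName:cri-o,RuntimeVersion:1.29.1" in the log.
    	ver, err := rt.Version(ctx, &runtimev1.VersionRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

    	// ImageService/ImageFsInfo -> image filesystem usage, as in the log response.
    	fs, err := img.ImageFsInfo(ctx, &runtimev1.ImageFsInfoRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, u := range fs.ImageFilesystems {
    		fmt.Println(u.FsId.Mountpoint, u.UsedBytes.Value)
    	}

    	// ListContainers with an empty filter returns every container, which is
    	// exactly the "full container list" responses dumped above.
    	cs, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(len(cs.Containers), "containers")
    }

From the command line, crictl version, crictl imagefsinfo and crictl ps -a issue roughly the same requests against the same socket.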
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ac855ed56c2e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       5                   1b488473663a9       storage-provisioner
	6336035c08e95       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   6176436eca7ff       kindnet-s87k2
	919081b91e5bb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      4 minutes ago       Running             kube-controller-manager   2                   9ab77018e92d9       kube-controller-manager-ha-371738
	853b9a7bfac3a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   36cad30ca97d8       busybox-fc5497c4f-f8cxz
	c9ef31e00ee2c       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      4 minutes ago       Running             kube-apiserver            3                   8b198b23f7183       kube-apiserver-ha-371738
	d3eebe566cbfa       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   20793aefea29b       kube-vip-ha-371738
	988e0e16ce68f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   0ffa8680c564a       coredns-7db6d8ff4d-9hc82
	8657d9bf44d96       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   0e08629c20763       coredns-7db6d8ff4d-jvvpr
	9744a7e64b6df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   1b488473663a9       storage-provisioner
	c79aa579b0cae       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   e56329660a79e       kube-proxy-zw62l
	b501a33161b99       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   077e956a7dccc       kube-scheduler-ha-371738
	14e36bfb114f2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   3fae8202a1068       etcd-ha-371738
	97b8c9a163319       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   8b198b23f7183       kube-apiserver-ha-371738
	323c8a3e2bb2e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   9ab77018e92d9       kube-controller-manager-ha-371738
	7503e9373d138       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   a7ec7ade955ff       kindnet-s87k2
	ee362bb57c39b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   2952502d79ed7       busybox-fc5497c4f-f8cxz
	0895fff8b18b0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   96b6f46faf798       coredns-7db6d8ff4d-9hc82
	a8223d8428849       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   6951735c94141       coredns-7db6d8ff4d-jvvpr
	484faebf3e657       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      15 minutes ago      Exited              kube-proxy                0                   78d8eb3f68b71       kube-proxy-zw62l
	c7bfd34cee24c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   6b52fa6b93c1b       etcd-ha-371738
	c9112b9048168       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      16 minutes ago      Exited              kube-scheduler            0                   6c0d855406f87       kube-scheduler-ha-371738
	
	
	==> coredns [0895fff8b18b0ab113410d68f08119219ee8ddff8716152d1171759a103858cf] <==
	[INFO] 10.244.0.4:60826 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142804s
	[INFO] 10.244.1.2:55654 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142714s
	[INFO] 10.244.1.2:34889 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117318s
	[INFO] 10.244.1.2:45674 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142204s
	[INFO] 10.244.1.2:43577 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088578s
	[INFO] 10.244.1.2:36740 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123852s
	[INFO] 10.244.1.2:57454 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168492s
	[INFO] 10.244.2.2:49398 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205465s
	[INFO] 10.244.2.2:48930 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221231s
	[INFO] 10.244.2.2:42052 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108139s
	[INFO] 10.244.0.4:40360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213257s
	[INFO] 10.244.0.4:54447 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081534s
	[INFO] 10.244.1.2:40715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185061s
	[INFO] 10.244.1.2:45537 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165941s
	[INFO] 10.244.1.2:38158 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132179s
	[INFO] 10.244.2.2:42970 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000371127s
	[INFO] 10.244.2.2:50230 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172364s
	[INFO] 10.244.0.4:51459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058901s
	[INFO] 10.244.0.4:59988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131476s
	[INFO] 10.244.1.2:56359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140553s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8657d9bf44d968fb405af2d73a04c2887cf209c19811cc20256b9f4e6230c71a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44488->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1521942851]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 00:25:57.193) (total time: 10165ms):
	Trace[1521942851]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44486->10.96.0.1:443: read: connection reset by peer 10164ms (00:26:07.358)
	Trace[1521942851]: [10.165002909s] [10.165002909s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44488->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44486->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[591503224]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 00:26:17.833) (total time: 10002ms):
	Trace[591503224]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:26:27.835)
	Trace[591503224]: [10.002384233s] [10.002384233s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [988e0e16ce68f3643b9af0d65ea12352ab0639e773ab1352b0a8151eb51f8626] <==
	[INFO] plugin/kubernetes: Trace[207359994]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 00:25:50.093) (total time: 10001ms):
	Trace[207359994]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:26:00.095)
	Trace[207359994]: [10.001948631s] [10.001948631s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1794704336]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 00:25:54.166) (total time: 10001ms):
	Trace[1794704336]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:26:04.168)
	Trace[1794704336]: [10.001569909s] [10.001569909s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:34696->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:34696->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34682->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34682->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a8223d8428849d18cab4805c366d0f6f38f9df7362a2b825582a37627b5cee6c] <==
	[INFO] 10.244.2.2:59691 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134121s
	[INFO] 10.244.0.4:54126 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001623235s
	[INFO] 10.244.0.4:42647 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000295581s
	[INFO] 10.244.0.4:47843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001168377s
	[INFO] 10.244.0.4:59380 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132829s
	[INFO] 10.244.0.4:59464 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032892s
	[INFO] 10.244.0.4:52319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063642s
	[INFO] 10.244.1.2:41188 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001744808s
	[INFO] 10.244.1.2:56595 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001214481s
	[INFO] 10.244.2.2:57639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180873s
	[INFO] 10.244.0.4:57748 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177324s
	[INFO] 10.244.0.4:49496 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076032s
	[INFO] 10.244.1.2:36655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131976s
	[INFO] 10.244.2.2:37462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221492s
	[INFO] 10.244.2.2:58605 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186595s
	[INFO] 10.244.0.4:34556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191452s
	[INFO] 10.244.0.4:53073 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000345299s
	[INFO] 10.244.1.2:38241 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000181093s
	[INFO] 10.244.1.2:59304 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166312s
	[INFO] 10.244.1.2:50151 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139637s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1838&timeout=6m43s&timeoutSeconds=403&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1838&timeout=5m9s&timeoutSeconds=309&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1838&timeout=5m17s&timeoutSeconds=317&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-371738
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T00_14_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:14:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:30:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:26:33 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:26:33 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:26:33 +0000   Sat, 20 Apr 2024 00:14:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:26:33 +0000   Sat, 20 Apr 2024 00:15:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-371738
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74609fff13e94a48ba74bd0fc50a4818
	  System UUID:                74609fff-13e9-4a48-ba74-bd0fc50a4818
	  Boot ID:                    2adb72ca-aae0-452d-9d86-779c19923b8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f8cxz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-9hc82             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-jvvpr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-371738                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-s87k2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-371738             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-371738    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-zw62l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-371738             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-371738                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m21s  kube-proxy       
	  Normal   Starting                 15m    kube-proxy       
	  Normal   Starting                 15m    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    15m    kubelet          Node ha-371738 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  15m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m    kubelet          Node ha-371738 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m    kubelet          Node ha-371738 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m    node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal   NodeReady                15m    kubelet          Node ha-371738 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Warning  ContainerGCFailed        5m55s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m9s   node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal   RegisteredNode           4m8s   node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	  Normal   RegisteredNode           3m15s  node-controller  Node ha-371738 event: Registered Node ha-371738 in Controller
	
	
	Name:               ha-371738-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_16_02_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:15:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:30:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:27:22 +0000   Sat, 20 Apr 2024 00:26:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:27:22 +0000   Sat, 20 Apr 2024 00:26:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:27:22 +0000   Sat, 20 Apr 2024 00:26:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:27:22 +0000   Sat, 20 Apr 2024 00:26:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-371738-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e23e7a13fe24abd8986bea706ca80e3
	  System UUID:                4e23e7a1-3fe2-4abd-8986-bea706ca80e3
	  Boot ID:                    bc9fbe65-d0b4-4673-b35c-703e2f7e1f06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j7g5h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-371738-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-ggw7f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-371738-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-371738-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-59wls                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-371738-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-371738-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m12s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-371738-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-371738-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-371738-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-371738-m02 status is now: NodeNotReady
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s (x8 over 4m41s)  kubelet          Node ha-371738-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s (x8 over 4m41s)  kubelet          Node ha-371738-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s (x7 over 4m41s)  kubelet          Node ha-371738-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-371738-m02 event: Registered Node ha-371738-m02 in Controller
	
	
	Name:               ha-371738-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-371738-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-371738
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_18_15_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:18:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-371738-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:28:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 20 Apr 2024 00:28:02 +0000   Sat, 20 Apr 2024 00:29:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 20 Apr 2024 00:28:02 +0000   Sat, 20 Apr 2024 00:29:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 20 Apr 2024 00:28:02 +0000   Sat, 20 Apr 2024 00:29:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 20 Apr 2024 00:28:02 +0000   Sat, 20 Apr 2024 00:29:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-371738-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 236dccfc477f4e3db2ca80077dc2160d
	  System UUID:                236dccfc-477f-4e3d-b2ca-80077dc2160d
	  Boot ID:                    68dd9360-905d-4c49-b8f5-3ad8f692d4cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2j254    0 (0%)       0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-zsn9n              100m (5%)    100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-7fn2b           0 (0%)       0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x4 over 12m)      kubelet          Node ha-371738-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x4 over 12m)      kubelet          Node ha-371738-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x4 over 12m)      kubelet          Node ha-371738-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-371738-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   NodeNotReady             3m29s                  node-controller  Node ha-371738-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-371738-m04 event: Registered Node ha-371738-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m49s (x3 over 2m49s)  kubelet          Node ha-371738-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x3 over 2m49s)  kubelet          Node ha-371738-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x3 over 2m49s)  kubelet          Node ha-371738-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s (x2 over 2m49s)  kubelet          Node ha-371738-m04 has been rebooted, boot id: 68dd9360-905d-4c49-b8f5-3ad8f692d4cb
	  Normal   NodeReady                2m49s (x2 over 2m49s)  kubelet          Node ha-371738-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s                   node-controller  Node ha-371738-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.470452] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056643] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066813] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.173842] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.129751] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.277871] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.788058] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.061136] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.194311] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +1.186377] kauditd_printk_skb: 57 callbacks suppressed
	[  +8.916528] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.094324] kauditd_printk_skb: 40 callbacks suppressed
	[Apr20 00:15] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.572775] kauditd_printk_skb: 72 callbacks suppressed
	[Apr20 00:22] kauditd_printk_skb: 1 callbacks suppressed
	[Apr20 00:25] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +0.218476] systemd-fstab-generator[3830]: Ignoring "noauto" option for root device
	[  +0.227584] systemd-fstab-generator[3864]: Ignoring "noauto" option for root device
	[  +0.183646] systemd-fstab-generator[3886]: Ignoring "noauto" option for root device
	[  +0.319469] systemd-fstab-generator[3936]: Ignoring "noauto" option for root device
	[  +2.059096] systemd-fstab-generator[4602]: Ignoring "noauto" option for root device
	[  +3.382447] kauditd_printk_skb: 231 callbacks suppressed
	[Apr20 00:26] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [14e36bfb114f2bd2d7fc4262b41df0df3a85d79e4c6a533577e909a0e46e0a80] <==
	{"level":"info","ts":"2024-04-20T00:27:18.020243Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:27:18.03757Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"119fe9e65aa8addc","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-20T00:27:18.037646Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:27:18.039987Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:27:18.045431Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"119fe9e65aa8addc","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-20T00:27:18.04554Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"warn","ts":"2024-04-20T00:28:05.966985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.981442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-59wls\" ","response":"range_response_count:1 size:4592"}
	{"level":"info","ts":"2024-04-20T00:28:05.967396Z","caller":"traceutil/trace.go:171","msg":"trace[2058211705] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-59wls; range_end:; response_count:1; response_revision:2462; }","duration":"101.449447ms","start":"2024-04-20T00:28:05.865909Z","end":"2024-04-20T00:28:05.967359Z","steps":["trace[2058211705] 'range keys from in-memory index tree'  (duration: 99.410969ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:28:16.448867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(11573293933243462141 13613591437690041669)"}
	{"level":"info","ts":"2024-04-20T00:28:16.451388Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","removed-remote-peer-id":"119fe9e65aa8addc","removed-remote-peer-urls":["https://192.168.39.253:2380"]}
	{"level":"info","ts":"2024-04-20T00:28:16.451499Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"warn","ts":"2024-04-20T00:28:16.453455Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:28:16.453533Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"warn","ts":"2024-04-20T00:28:16.45399Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:28:16.454054Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:28:16.454187Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"warn","ts":"2024-04-20T00:28:16.45442Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc","error":"context canceled"}
	{"level":"warn","ts":"2024-04-20T00:28:16.454576Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"119fe9e65aa8addc","error":"failed to read 119fe9e65aa8addc on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-20T00:28:16.454652Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"warn","ts":"2024-04-20T00:28:16.454881Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc","error":"context canceled"}
	{"level":"info","ts":"2024-04-20T00:28:16.454988Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:28:16.455047Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:28:16.455066Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"a09c9983ac28f1fd","removed-remote-peer-id":"119fe9e65aa8addc"}
	{"level":"warn","ts":"2024-04-20T00:28:16.465813Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a09c9983ac28f1fd","remote-peer-id-stream-handler":"a09c9983ac28f1fd","remote-peer-id-from":"119fe9e65aa8addc"}
	{"level":"warn","ts":"2024-04-20T00:28:16.474042Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a09c9983ac28f1fd","remote-peer-id-stream-handler":"a09c9983ac28f1fd","remote-peer-id-from":"119fe9e65aa8addc"}
	
	
	==> etcd [c7bfd34cee24c110efd9abc96611808a11c4259907fab042496c28923d6b9ac0] <==
	2024/04/20 00:24:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-20T00:24:10.169977Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:24:09.294363Z","time spent":"875.609388ms","remote":"127.0.0.1:46986","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	2024/04/20 00:24:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-20T00:24:10.170024Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:24:09.184491Z","time spent":"985.528793ms","remote":"127.0.0.1:46854","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 "}
	2024/04/20 00:24:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-20T00:24:10.170068Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:24:09.281831Z","time spent":"888.233349ms","remote":"127.0.0.1:46588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	2024/04/20 00:24:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-20T00:24:10.191324Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-20T00:24:10.191932Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.191977Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192053Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192294Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.19237Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192407Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192417Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"119fe9e65aa8addc"}
	{"level":"info","ts":"2024-04-20T00:24:10.192423Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192436Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192484Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192523Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192576Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.19266Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.192671Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bced3148e0d07545"}
	{"level":"info","ts":"2024-04-20T00:24:10.196056Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-04-20T00:24:10.196281Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-04-20T00:24:10.196317Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-371738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> kernel <==
	 00:30:51 up 16 min,  0 users,  load average: 0.54, 0.67, 0.42
	Linux ha-371738 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6336035c08e95e415f73490de9a3dab8a520c846415666117bb1e7e4ff497e1d] <==
	I0420 00:30:09.839808       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:30:19.852082       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:30:19.852348       1 main.go:227] handling current node
	I0420 00:30:19.852365       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:30:19.852379       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:30:19.852558       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:30:19.852599       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:30:29.858928       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:30:29.858979       1 main.go:227] handling current node
	I0420 00:30:29.858990       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:30:29.858995       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:30:29.859207       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:30:29.859251       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:30:39.874323       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:30:39.874369       1 main.go:227] handling current node
	I0420 00:30:39.874385       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:30:39.874391       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:30:39.874494       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:30:39.874527       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	I0420 00:30:49.894268       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0420 00:30:49.894497       1 main.go:227] handling current node
	I0420 00:30:49.894584       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0420 00:30:49.894641       1 main.go:250] Node ha-371738-m02 has CIDR [10.244.1.0/24] 
	I0420 00:30:49.894813       1 main.go:223] Handling node with IPs: map[192.168.39.61:{}]
	I0420 00:30:49.894879       1 main.go:250] Node ha-371738-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [7503e9373d138e5d2e23128934a5da5fd17cde8052cfdc2ccb8ea63ef43b5d37] <==
	I0420 00:25:42.498821       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0420 00:25:42.498869       1 main.go:107] hostIP = 192.168.39.217
	podIP = 192.168.39.217
	I0420 00:25:42.499040       1 main.go:116] setting mtu 1500 for CNI 
	I0420 00:25:42.499058       1 main.go:146] kindnetd IP family: "ipv4"
	I0420 00:25:42.499080       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	
	==> kube-apiserver [97b8c9a163319f08eb69e441dc04e555623c9a6fef77426e633b17dfe6ca7748] <==
	I0420 00:25:44.967586       1 options.go:221] external host was not specified, using 192.168.39.217
	I0420 00:25:44.978354       1 server.go:148] Version: v1.30.0
	I0420 00:25:44.978421       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:25:46.259612       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0420 00:25:46.262747       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 00:25:46.266481       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0420 00:25:46.266576       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0420 00:25:46.266751       1 instance.go:299] Using reconciler: lease
	W0420 00:26:06.257471       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0420 00:26:06.257471       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0420 00:26:06.267793       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [c9ef31e00ee2c05d749141026c75ec4447f79a380ae00cfbe806380a29e63c58] <==
	I0420 00:26:29.459431       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0420 00:26:29.494913       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 00:26:29.496394       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 00:26:29.502060       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 00:26:29.502193       1 policy_source.go:224] refreshing policies
	I0420 00:26:29.502848       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 00:26:29.506158       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0420 00:26:29.521605       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 00:26:29.566924       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 00:26:29.567000       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 00:26:29.567011       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 00:26:29.569719       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 00:26:29.576271       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 00:26:29.576375       1 aggregator.go:165] initial CRD sync complete...
	I0420 00:26:29.576425       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 00:26:29.576450       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 00:26:29.576478       1 cache.go:39] Caches are synced for autoregister controller
	W0420 00:26:29.759980       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.253]
	I0420 00:26:29.761834       1 controller.go:615] quota admission added evaluator for: endpoints
	I0420 00:26:29.776442       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0420 00:26:29.786723       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0420 00:26:30.376843       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0420 00:26:30.941671       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.253]
	W0420 00:26:50.916417       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.48]
	W0420 00:28:30.926234       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.48]
	
	
	==> kube-controller-manager [323c8a3e2bb2ea2ed1ecc1b2b0394c0a2f8bd196950bfff76a8d5d6292d348bb] <==
	I0420 00:25:46.397204       1 serving.go:380] Generated self-signed cert in-memory
	I0420 00:25:47.085496       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0420 00:25:47.085547       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:25:47.087799       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0420 00:25:47.089314       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0420 00:25:47.090991       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0420 00:25:47.091744       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0420 00:26:07.274932       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.217:8443/healthz\": dial tcp 192.168.39.217:8443: connect: connection refused"
	
	
	==> kube-controller-manager [919081b91e5bbc00fd283a7cc9a6268f1e14b692b56b061e2b21b046a9580fd9] <==
	E0420 00:29:02.101045       1 gc_controller.go:153] "Failed to get node" err="node \"ha-371738-m03\" not found" logger="pod-garbage-collector-controller" node="ha-371738-m03"
	E0420 00:29:02.101075       1 gc_controller.go:153] "Failed to get node" err="node \"ha-371738-m03\" not found" logger="pod-garbage-collector-controller" node="ha-371738-m03"
	E0420 00:29:02.101200       1 gc_controller.go:153] "Failed to get node" err="node \"ha-371738-m03\" not found" logger="pod-garbage-collector-controller" node="ha-371738-m03"
	E0420 00:29:02.101227       1 gc_controller.go:153] "Failed to get node" err="node \"ha-371738-m03\" not found" logger="pod-garbage-collector-controller" node="ha-371738-m03"
	I0420 00:29:04.015625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.11247ms"
	I0420 00:29:04.016359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.387µs"
	E0420 00:29:22.102171       1 gc_controller.go:153] "Failed to get node" err="node \"ha-371738-m03\" not found" logger="pod-garbage-collector-controller" node="ha-371738-m03"
	E0420 00:29:22.102225       1 gc_controller.go:153] "Failed to get node" err="node \"ha-371738-m03\" not found" logger="pod-garbage-collector-controller" node="ha-371738-m03"
	E0420 00:29:22.102234       1 gc_controller.go:153] "Failed to get node" err="node \"ha-371738-m03\" not found" logger="pod-garbage-collector-controller" node="ha-371738-m03"
	E0420 00:29:22.102239       1 gc_controller.go:153] "Failed to get node" err="node \"ha-371738-m03\" not found" logger="pod-garbage-collector-controller" node="ha-371738-m03"
	E0420 00:29:22.102244       1 gc_controller.go:153] "Failed to get node" err="node \"ha-371738-m03\" not found" logger="pod-garbage-collector-controller" node="ha-371738-m03"
	I0420 00:29:22.115840       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-371738-m03"
	I0420 00:29:22.145040       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-371738-m03"
	I0420 00:29:22.145150       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-924z9"
	I0420 00:29:22.177518       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-924z9"
	I0420 00:29:22.177913       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ph4sb"
	I0420 00:29:22.209542       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ph4sb"
	I0420 00:29:22.209589       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-371738-m03"
	I0420 00:29:22.234792       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-371738-m03"
	I0420 00:29:22.234840       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-371738-m03"
	I0420 00:29:22.270169       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-371738-m03"
	I0420 00:29:22.270288       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-371738-m03"
	I0420 00:29:22.301006       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-371738-m03"
	I0420 00:29:22.301297       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-371738-m03"
	I0420 00:29:22.330888       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-371738-m03"
	
	
	==> kube-proxy [484faebf3e657827d0455c913b4b0123dd3ab0b706dbfdb14bcebe6185bae26c] <==
	E0420 00:23:00.287313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:03.357409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:03.357476       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:03.357414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:03.357508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:03.357667       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:03.357747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:09.950893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:09.951014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:09.951052       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:09.951175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:09.951301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:09.951349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:19.167078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:19.167264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:22.238483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:22.238802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:25.309826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:25.309996       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:40.670248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:40.670430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:40.670552       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:40.670590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:23:46.817557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:23:46.817621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [c79aa579b0cae6036a46a7b1aaa75e08bbf35641467073d52465b5f88d81b40d] <==
	I0420 00:26:29.692931       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:26:29.693051       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:26:29.693076       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:26:29.697864       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:26:29.698290       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:26:29.698378       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:26:29.700198       1 config.go:192] "Starting service config controller"
	I0420 00:26:29.700255       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:26:29.700287       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:26:29.700291       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:26:29.701288       1 config.go:319] "Starting node config controller"
	I0420 00:26:29.701328       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0420 00:26:32.702047       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0420 00:26:32.702418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:26:32.702615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-371738&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:26:32.702771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:26:32.703073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0420 00:26:32.703309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0420 00:26:32.703928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0420 00:26:34.101223       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:26:34.201351       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:26:34.201387       1 shared_informer.go:320] Caches are synced for node config
	W0420 00:29:15.946371       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0420 00:29:15.946700       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0420 00:29:15.946768       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [b501a33161b99652e0e199689a5c78dd689f7e56b62656760965fdca22ec9e6f] <==
	E0420 00:26:27.047486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.217:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0420 00:26:29.411683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 00:26:29.414213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 00:26:29.414407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 00:26:29.414450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 00:26:29.414555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:26:29.414594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 00:26:29.414690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:26:29.414721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:26:29.414800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 00:26:29.414830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 00:26:29.414890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:26:29.414916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:26:29.414994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:26:29.415023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:26:29.415204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:26:29.415248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:26:29.415330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:26:29.415371       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:26:29.415434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:26:29.415463       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0420 00:26:29.581268       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0420 00:28:15.011730       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2j254\": pod busybox-fc5497c4f-2j254 is already assigned to node \"ha-371738-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-2j254" node="ha-371738-m04"
	E0420 00:28:15.012234       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2j254\": pod busybox-fc5497c4f-2j254 is already assigned to node \"ha-371738-m04\"" pod="default/busybox-fc5497c4f-2j254"
	I0420 00:28:15.012262       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-2j254" node="ha-371738-m04"
	
	
	==> kube-scheduler [c9112b9048168b933667c9c4732dd41fe575f0dfc84b45fcc82ef29b6f77b6e9] <==
	W0420 00:24:07.927764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 00:24:07.927871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 00:24:08.270381       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:24:08.270539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:24:08.364355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:24:08.364441       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:24:08.811724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:24:08.811918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:24:09.040321       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 00:24:09.040381       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:24:09.172716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 00:24:09.173228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 00:24:09.344711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 00:24:09.344741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 00:24:09.404484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 00:24:09.404540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 00:24:09.445000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:24:09.445144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:24:09.570769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 00:24:09.570828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 00:24:09.707809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:24:09.707862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0420 00:24:09.737375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:24:09.737434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:24:10.126797       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 20 00:26:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:26:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:26:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:26:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:27:07 ha-371738 kubelet[1375]: I0420 00:27:07.998679    1375 scope.go:117] "RemoveContainer" containerID="9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94"
	Apr 20 00:27:07 ha-371738 kubelet[1375]: E0420 00:27:07.999047    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1d7b89d3-7cff-4258-8215-819971fa1b81)\"" pod="kube-system/storage-provisioner" podUID="1d7b89d3-7cff-4258-8215-819971fa1b81"
	Apr 20 00:27:22 ha-371738 kubelet[1375]: I0420 00:27:22.997574    1375 scope.go:117] "RemoveContainer" containerID="9744a7e64b6dfe1c7ddf060cff8b03b0cf3023d30edce9ba4525f223b1cd0b94"
	Apr 20 00:27:31 ha-371738 kubelet[1375]: I0420 00:27:31.997991    1375 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-371738" podUID="8d162382-25bb-4393-8c45-a8487b571605"
	Apr 20 00:27:32 ha-371738 kubelet[1375]: I0420 00:27:32.023654    1375 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-371738"
	Apr 20 00:27:36 ha-371738 kubelet[1375]: I0420 00:27:36.021222    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-371738" podStartSLOduration=4.021079157 podStartE2EDuration="4.021079157s" podCreationTimestamp="2024-04-20 00:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 00:27:36.01991427 +0000 UTC m=+760.157759244" watchObservedRunningTime="2024-04-20 00:27:36.021079157 +0000 UTC m=+760.158924140"
	Apr 20 00:27:56 ha-371738 kubelet[1375]: E0420 00:27:56.022588    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:27:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:27:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:27:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:27:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:28:56 ha-371738 kubelet[1375]: E0420 00:28:56.015366    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:28:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:28:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:28:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:28:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:29:56 ha-371738 kubelet[1375]: E0420 00:29:56.015930    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:29:56 ha-371738 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:29:56 ha-371738 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:29:56 ha-371738 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:29:56 ha-371738 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 00:30:50.185292  103201 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18703-76456/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-371738 -n ha-371738
helpers_test.go:261: (dbg) Run:  kubectl --context ha-371738 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.09s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (305.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-059001
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-059001
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-059001: exit status 82 (2m2.707104411s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-059001-m03"  ...
	* Stopping node "multinode-059001-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-059001" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-059001 --wait=true -v=8 --alsologtostderr
E0420 00:48:11.657936   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:50:27.814728   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-059001 --wait=true -v=8 --alsologtostderr: (3m0.621304289s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-059001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-059001 -n multinode-059001
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-059001 logs -n 25: (1.671404572s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m02:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2465559633/001/cp-test_multinode-059001-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m02:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001:/home/docker/cp-test_multinode-059001-m02_multinode-059001.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n multinode-059001 sudo cat                                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /home/docker/cp-test_multinode-059001-m02_multinode-059001.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m02:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03:/home/docker/cp-test_multinode-059001-m02_multinode-059001-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n multinode-059001-m03 sudo cat                                   | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /home/docker/cp-test_multinode-059001-m02_multinode-059001-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp testdata/cp-test.txt                                                | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m03:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2465559633/001/cp-test_multinode-059001-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m03:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001:/home/docker/cp-test_multinode-059001-m03_multinode-059001.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n multinode-059001 sudo cat                                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /home/docker/cp-test_multinode-059001-m03_multinode-059001.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m03:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m02:/home/docker/cp-test_multinode-059001-m03_multinode-059001-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n multinode-059001-m02 sudo cat                                   | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /home/docker/cp-test_multinode-059001-m03_multinode-059001-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-059001 node stop m03                                                          | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	| node    | multinode-059001 node start                                                             | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-059001                                                                | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC |                     |
	| stop    | -p multinode-059001                                                                     | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC |                     |
	| start   | -p multinode-059001                                                                     | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:47 UTC | 20 Apr 24 00:50 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-059001                                                                | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:47:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:47:57.013195  112536 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:47:57.013289  112536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:47:57.013299  112536 out.go:304] Setting ErrFile to fd 2...
	I0420 00:47:57.013303  112536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:47:57.013537  112536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:47:57.014076  112536 out.go:298] Setting JSON to false
	I0420 00:47:57.014958  112536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12624,"bootTime":1713561453,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 00:47:57.015010  112536 start.go:139] virtualization: kvm guest
	I0420 00:47:57.017488  112536 out.go:177] * [multinode-059001] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 00:47:57.019019  112536 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:47:57.020532  112536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:47:57.019028  112536 notify.go:220] Checking for updates...
	I0420 00:47:57.022226  112536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:47:57.023628  112536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:47:57.024855  112536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 00:47:57.025999  112536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:47:57.027641  112536 config.go:182] Loaded profile config "multinode-059001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:57.027764  112536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:47:57.028378  112536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:47:57.028424  112536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:47:57.043243  112536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0420 00:47:57.043784  112536 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:47:57.044427  112536 main.go:141] libmachine: Using API Version  1
	I0420 00:47:57.044449  112536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:47:57.044820  112536 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:47:57.044983  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:47:57.079002  112536 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 00:47:57.080323  112536 start.go:297] selected driver: kvm2
	I0420 00:47:57.080337  112536 start.go:901] validating driver "kvm2" against &{Name:multinode-059001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:47:57.080522  112536 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:47:57.080963  112536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:47:57.081049  112536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 00:47:57.095226  112536 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 00:47:57.095844  112536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:47:57.095913  112536 cni.go:84] Creating CNI manager for ""
	I0420 00:47:57.095930  112536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0420 00:47:57.096002  112536 start.go:340] cluster config:
	{Name:multinode-059001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:47:57.096162  112536 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:47:57.097860  112536 out.go:177] * Starting "multinode-059001" primary control-plane node in "multinode-059001" cluster
	I0420 00:47:57.099202  112536 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:47:57.099245  112536 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 00:47:57.099259  112536 cache.go:56] Caching tarball of preloaded images
	I0420 00:47:57.099357  112536 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:47:57.099373  112536 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:47:57.099483  112536 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/config.json ...
	I0420 00:47:57.099685  112536 start.go:360] acquireMachinesLock for multinode-059001: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:47:57.099729  112536 start.go:364] duration metric: took 24.489µs to acquireMachinesLock for "multinode-059001"
	I0420 00:47:57.099743  112536 start.go:96] Skipping create...Using existing machine configuration
	I0420 00:47:57.099750  112536 fix.go:54] fixHost starting: 
	I0420 00:47:57.100049  112536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:47:57.100085  112536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:47:57.113268  112536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44351
	I0420 00:47:57.113791  112536 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:47:57.114342  112536 main.go:141] libmachine: Using API Version  1
	I0420 00:47:57.114366  112536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:47:57.114628  112536 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:47:57.114806  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:47:57.115010  112536 main.go:141] libmachine: (multinode-059001) Calling .GetState
	I0420 00:47:57.116540  112536 fix.go:112] recreateIfNeeded on multinode-059001: state=Running err=<nil>
	W0420 00:47:57.116558  112536 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 00:47:57.118350  112536 out.go:177] * Updating the running kvm2 "multinode-059001" VM ...
	I0420 00:47:57.119691  112536 machine.go:94] provisionDockerMachine start ...
	I0420 00:47:57.119713  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:47:57.119901  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.122271  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.122660  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.122696  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.122823  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.122985  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.123151  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.123290  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.123450  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:47:57.123694  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:47:57.123708  112536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 00:47:57.230886  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-059001
	
	I0420 00:47:57.230927  112536 main.go:141] libmachine: (multinode-059001) Calling .GetMachineName
	I0420 00:47:57.231179  112536 buildroot.go:166] provisioning hostname "multinode-059001"
	I0420 00:47:57.231214  112536 main.go:141] libmachine: (multinode-059001) Calling .GetMachineName
	I0420 00:47:57.231417  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.234084  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.234411  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.234443  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.234575  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.234753  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.234901  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.235080  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.235266  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:47:57.235445  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:47:57.235463  112536 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-059001 && echo "multinode-059001" | sudo tee /etc/hostname
	I0420 00:47:57.359531  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-059001
	
	I0420 00:47:57.359556  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.362264  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.362637  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.362749  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.362912  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.363125  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.363295  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.363442  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.363602  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:47:57.363813  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:47:57.363837  112536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-059001' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-059001/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-059001' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:47:57.470936  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
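	The hosts-file block above is deliberately idempotent: it leaves /etc/hosts alone when some line already ends in the machine name, otherwise it rewrites an existing 127.0.1.1 entry in place and only appends a brand-new line as a last resort. A quick way to see what it left behind (an illustrative check, not part of the test run; the exact line depends on what was already in the guest image):
	
	  $ grep -n '127.0.1.1' /etc/hosts
	  # expect a line mapping 127.0.1.1 to multinode-059001
	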
	I0420 00:47:57.470964  112536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:47:57.470992  112536 buildroot.go:174] setting up certificates
	I0420 00:47:57.471002  112536 provision.go:84] configureAuth start
	I0420 00:47:57.471011  112536 main.go:141] libmachine: (multinode-059001) Calling .GetMachineName
	I0420 00:47:57.471329  112536 main.go:141] libmachine: (multinode-059001) Calling .GetIP
	I0420 00:47:57.473952  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.474307  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.474328  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.474462  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.476685  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.477185  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.477237  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.477277  112536 provision.go:143] copyHostCerts
	I0420 00:47:57.477350  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:47:57.477391  112536 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:47:57.477402  112536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:47:57.477467  112536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:47:57.477592  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:47:57.477614  112536 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:47:57.477619  112536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:47:57.477649  112536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:47:57.477716  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:47:57.477731  112536 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:47:57.477735  112536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:47:57.477756  112536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:47:57.477865  112536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.multinode-059001 san=[127.0.0.1 192.168.39.200 localhost minikube multinode-059001]
	I0420 00:47:57.612193  112536 provision.go:177] copyRemoteCerts
	I0420 00:47:57.612258  112536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:47:57.612284  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.615039  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.615523  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.615566  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.615740  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.615924  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.616126  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.616251  112536 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001/id_rsa Username:docker}
	I0420 00:47:57.704307  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:47:57.704386  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:47:57.735015  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:47:57.735094  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0420 00:47:57.762111  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:47:57.762173  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 00:47:57.789401  112536 provision.go:87] duration metric: took 318.385376ms to configureAuth
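	configureAuth above regenerates the server certificate and copyRemoteCerts pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest. An illustrative spot-check (assuming openssl is available in the guest image; the paths are the ones used by the scp lines above):
	
	  $ sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
	  $ sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	
	The SAN list should line up with the san=[...] values logged by provision.go:117 above (127.0.0.1, 192.168.39.200, localhost, minikube, multinode-059001).
	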
	I0420 00:47:57.789427  112536 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:47:57.789687  112536 config.go:182] Loaded profile config "multinode-059001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:57.789777  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.792332  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.792776  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.792806  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.792980  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.793183  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.793386  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.793560  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.793736  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:47:57.793950  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:47:57.793966  112536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:49:28.684701  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:49:28.684747  112536 machine.go:97] duration metric: took 1m31.565040203s to provisionDockerMachine
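	Based on the command and the output echoed back, the drop-in written by that step should read, more or less verbatim:
	
	  $ cat /etc/sysconfig/crio.minikube
	  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	The systemctl restart crio chained onto the same SSH command accounts for nearly all of the 1m31s provisionDockerMachine duration reported above (00:47:57 to 00:49:28).
	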
	I0420 00:49:28.684764  112536 start.go:293] postStartSetup for "multinode-059001" (driver="kvm2")
	I0420 00:49:28.684831  112536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:49:28.684863  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.685294  112536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:49:28.685343  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:49:28.688944  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.689409  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.689442  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.689573  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:49:28.689797  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.689972  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:49:28.690119  112536 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001/id_rsa Username:docker}
	I0420 00:49:28.774923  112536 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:49:28.779738  112536 command_runner.go:130] > NAME=Buildroot
	I0420 00:49:28.779761  112536 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0420 00:49:28.779774  112536 command_runner.go:130] > ID=buildroot
	I0420 00:49:28.779780  112536 command_runner.go:130] > VERSION_ID=2023.02.9
	I0420 00:49:28.779785  112536 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0420 00:49:28.779819  112536 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:49:28.779837  112536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:49:28.779894  112536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:49:28.779996  112536 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:49:28.780011  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:49:28.780135  112536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:49:28.790922  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:49:28.819152  112536 start.go:296] duration metric: took 134.372664ms for postStartSetup
	I0420 00:49:28.819290  112536 fix.go:56] duration metric: took 1m31.719519569s for fixHost
	I0420 00:49:28.819321  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:49:28.822162  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.822550  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.822589  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.822712  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:49:28.822892  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.823037  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.823273  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:49:28.823482  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:49:28.823688  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:49:28.823702  112536 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:49:28.926255  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713574168.903910655
	
	I0420 00:49:28.926277  112536 fix.go:216] guest clock: 1713574168.903910655
	I0420 00:49:28.926287  112536 fix.go:229] Guest: 2024-04-20 00:49:28.903910655 +0000 UTC Remote: 2024-04-20 00:49:28.819301079 +0000 UTC m=+91.854799305 (delta=84.609576ms)
	I0420 00:49:28.926310  112536 fix.go:200] guest clock delta is within tolerance: 84.609576ms
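	The delta is simply guest time minus host time at the moment of the probe: 1713574168.903910655 s - 1713574168.819301079 s = 0.084609576 s, i.e. the 84.609576ms shown, which is why fix.go reports the guest clock as within tolerance.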
	I0420 00:49:28.926317  112536 start.go:83] releasing machines lock for "multinode-059001", held for 1m31.826578397s
	I0420 00:49:28.926344  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.926640  112536 main.go:141] libmachine: (multinode-059001) Calling .GetIP
	I0420 00:49:28.929290  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.929701  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.929728  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.929882  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.930359  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.930554  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.930665  112536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:49:28.930697  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:49:28.930767  112536 ssh_runner.go:195] Run: cat /version.json
	I0420 00:49:28.930789  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:49:28.933539  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.933771  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.933901  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.933930  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.934051  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:49:28.934158  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.934178  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.934222  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.934358  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:49:28.934378  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:49:28.934584  112536 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001/id_rsa Username:docker}
	I0420 00:49:28.934601  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.934729  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:49:28.934842  112536 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001/id_rsa Username:docker}
	I0420 00:49:29.010653  112536 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0420 00:49:29.010792  112536 ssh_runner.go:195] Run: systemctl --version
	I0420 00:49:29.035190  112536 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0420 00:49:29.035286  112536 command_runner.go:130] > systemd 252 (252)
	I0420 00:49:29.035316  112536 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0420 00:49:29.035386  112536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:49:29.208553  112536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0420 00:49:29.215884  112536 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0420 00:49:29.216372  112536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:49:29.216441  112536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:49:29.226739  112536 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0420 00:49:29.226771  112536 start.go:494] detecting cgroup driver to use...
	I0420 00:49:29.226841  112536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:49:29.244417  112536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:49:29.259510  112536 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:49:29.259556  112536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:49:29.274056  112536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:49:29.288252  112536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:49:29.434378  112536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:49:29.604516  112536 docker.go:233] disabling docker service ...
	I0420 00:49:29.604625  112536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:49:29.627883  112536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:49:29.642449  112536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:49:29.802096  112536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:49:29.950458  112536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:49:29.965707  112536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:49:29.987546  112536 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
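	For reference, the crictl configuration written by that printf is a one-key YAML file; it should end up as:
	
	  $ cat /etc/crictl.yaml
	  runtime-endpoint: unix:///var/run/crio/crio.sock
	
	crictl reads this file so that later calls in this run, such as sudo crictl images --output json, talk to CRI-O's socket directly instead of probing for other runtimes.
	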
	I0420 00:49:29.988207  112536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:49:29.988260  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.000005  112536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:49:30.000085  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.011713  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.023295  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.034464  112536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:49:30.046038  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.057100  112536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.069776  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
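	Taken together, the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A sketch of the keys they leave behind (values inferred from the commands themselves, not from a dump of the file):
	
	  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	    "net.ipv4.ip_unprivileged_port_start=0",
	
	The last entry sits inside the default_sysctls = [ ... ] block that the preceding grep-or-sed step creates when it is missing.
	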
	I0420 00:49:30.080908  112536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:49:30.091008  112536 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0420 00:49:30.091084  112536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:49:30.100934  112536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:49:30.243280  112536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 00:49:30.491269  112536 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:49:30.491420  112536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:49:30.497797  112536 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0420 00:49:30.497825  112536 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0420 00:49:30.497834  112536 command_runner.go:130] > Device: 0,22	Inode: 1305        Links: 1
	I0420 00:49:30.497844  112536 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0420 00:49:30.497852  112536 command_runner.go:130] > Access: 2024-04-20 00:49:30.366609509 +0000
	I0420 00:49:30.497861  112536 command_runner.go:130] > Modify: 2024-04-20 00:49:30.366609509 +0000
	I0420 00:49:30.497870  112536 command_runner.go:130] > Change: 2024-04-20 00:49:30.366609509 +0000
	I0420 00:49:30.497876  112536 command_runner.go:130] >  Birth: -
	I0420 00:49:30.498074  112536 start.go:562] Will wait 60s for crictl version
	I0420 00:49:30.498139  112536 ssh_runner.go:195] Run: which crictl
	I0420 00:49:30.502699  112536 command_runner.go:130] > /usr/bin/crictl
	I0420 00:49:30.502769  112536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:49:30.549150  112536 command_runner.go:130] > Version:  0.1.0
	I0420 00:49:30.549187  112536 command_runner.go:130] > RuntimeName:  cri-o
	I0420 00:49:30.549194  112536 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0420 00:49:30.549201  112536 command_runner.go:130] > RuntimeApiVersion:  v1
	I0420 00:49:30.550443  112536 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
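	With the socket up and /etc/crictl.yaml in place, the same information can be pulled by hand; these are standard crictl subcommands, shown as an illustrative check rather than something the test itself runs (except where the log says so):
	
	  $ sudo crictl version   # runtime name and version, as printed above
	  $ sudo crictl info      # runtime status and CNI configuration state
	  $ sudo crictl images    # cached images; the run below queries the same data with --output json
	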
	I0420 00:49:30.550539  112536 ssh_runner.go:195] Run: crio --version
	I0420 00:49:30.585405  112536 command_runner.go:130] > crio version 1.29.1
	I0420 00:49:30.585427  112536 command_runner.go:130] > Version:        1.29.1
	I0420 00:49:30.585433  112536 command_runner.go:130] > GitCommit:      unknown
	I0420 00:49:30.585437  112536 command_runner.go:130] > GitCommitDate:  unknown
	I0420 00:49:30.585442  112536 command_runner.go:130] > GitTreeState:   clean
	I0420 00:49:30.585447  112536 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0420 00:49:30.585451  112536 command_runner.go:130] > GoVersion:      go1.21.6
	I0420 00:49:30.585455  112536 command_runner.go:130] > Compiler:       gc
	I0420 00:49:30.585459  112536 command_runner.go:130] > Platform:       linux/amd64
	I0420 00:49:30.585463  112536 command_runner.go:130] > Linkmode:       dynamic
	I0420 00:49:30.585468  112536 command_runner.go:130] > BuildTags:      
	I0420 00:49:30.585489  112536 command_runner.go:130] >   containers_image_ostree_stub
	I0420 00:49:30.585493  112536 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0420 00:49:30.585497  112536 command_runner.go:130] >   btrfs_noversion
	I0420 00:49:30.585502  112536 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0420 00:49:30.585506  112536 command_runner.go:130] >   libdm_no_deferred_remove
	I0420 00:49:30.585509  112536 command_runner.go:130] >   seccomp
	I0420 00:49:30.585513  112536 command_runner.go:130] > LDFlags:          unknown
	I0420 00:49:30.585517  112536 command_runner.go:130] > SeccompEnabled:   true
	I0420 00:49:30.585521  112536 command_runner.go:130] > AppArmorEnabled:  false
	I0420 00:49:30.586904  112536 ssh_runner.go:195] Run: crio --version
	I0420 00:49:30.619098  112536 command_runner.go:130] > crio version 1.29.1
	I0420 00:49:30.619123  112536 command_runner.go:130] > Version:        1.29.1
	I0420 00:49:30.619131  112536 command_runner.go:130] > GitCommit:      unknown
	I0420 00:49:30.619136  112536 command_runner.go:130] > GitCommitDate:  unknown
	I0420 00:49:30.619140  112536 command_runner.go:130] > GitTreeState:   clean
	I0420 00:49:30.619146  112536 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0420 00:49:30.619150  112536 command_runner.go:130] > GoVersion:      go1.21.6
	I0420 00:49:30.619154  112536 command_runner.go:130] > Compiler:       gc
	I0420 00:49:30.619158  112536 command_runner.go:130] > Platform:       linux/amd64
	I0420 00:49:30.619162  112536 command_runner.go:130] > Linkmode:       dynamic
	I0420 00:49:30.619170  112536 command_runner.go:130] > BuildTags:      
	I0420 00:49:30.619174  112536 command_runner.go:130] >   containers_image_ostree_stub
	I0420 00:49:30.619180  112536 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0420 00:49:30.619184  112536 command_runner.go:130] >   btrfs_noversion
	I0420 00:49:30.619188  112536 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0420 00:49:30.619192  112536 command_runner.go:130] >   libdm_no_deferred_remove
	I0420 00:49:30.619196  112536 command_runner.go:130] >   seccomp
	I0420 00:49:30.619200  112536 command_runner.go:130] > LDFlags:          unknown
	I0420 00:49:30.619204  112536 command_runner.go:130] > SeccompEnabled:   true
	I0420 00:49:30.619208  112536 command_runner.go:130] > AppArmorEnabled:  false
	I0420 00:49:30.621079  112536 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:49:30.622498  112536 main.go:141] libmachine: (multinode-059001) Calling .GetIP
	I0420 00:49:30.625405  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:30.625784  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:30.625824  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:30.625996  112536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:49:30.630857  112536 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0420 00:49:30.630962  112536 kubeadm.go:877] updating cluster {Name:multinode-059001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 00:49:30.631157  112536 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:49:30.631223  112536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:49:30.675732  112536 command_runner.go:130] > {
	I0420 00:49:30.675762  112536 command_runner.go:130] >   "images": [
	I0420 00:49:30.675775  112536 command_runner.go:130] >     {
	I0420 00:49:30.675788  112536 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0420 00:49:30.675797  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.675808  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0420 00:49:30.675815  112536 command_runner.go:130] >       ],
	I0420 00:49:30.675823  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.675837  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0420 00:49:30.675852  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0420 00:49:30.675857  112536 command_runner.go:130] >       ],
	I0420 00:49:30.675863  112536 command_runner.go:130] >       "size": "65291810",
	I0420 00:49:30.675868  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.675884  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.675898  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.675909  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.675919  112536 command_runner.go:130] >     },
	I0420 00:49:30.675929  112536 command_runner.go:130] >     {
	I0420 00:49:30.675941  112536 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0420 00:49:30.675950  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.675959  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0420 00:49:30.675967  112536 command_runner.go:130] >       ],
	I0420 00:49:30.675975  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676053  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0420 00:49:30.676098  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0420 00:49:30.676119  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676127  112536 command_runner.go:130] >       "size": "1363676",
	I0420 00:49:30.676134  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.676155  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676170  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676178  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676184  112536 command_runner.go:130] >     },
	I0420 00:49:30.676189  112536 command_runner.go:130] >     {
	I0420 00:49:30.676204  112536 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0420 00:49:30.676213  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676224  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0420 00:49:30.676233  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676239  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676265  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0420 00:49:30.676283  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0420 00:49:30.676292  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676299  112536 command_runner.go:130] >       "size": "31470524",
	I0420 00:49:30.676309  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.676315  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676324  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676329  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676336  112536 command_runner.go:130] >     },
	I0420 00:49:30.676342  112536 command_runner.go:130] >     {
	I0420 00:49:30.676352  112536 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0420 00:49:30.676362  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676371  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0420 00:49:30.676380  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676387  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676402  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0420 00:49:30.676433  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0420 00:49:30.676443  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676449  112536 command_runner.go:130] >       "size": "61245718",
	I0420 00:49:30.676456  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.676465  112536 command_runner.go:130] >       "username": "nonroot",
	I0420 00:49:30.676471  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676480  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676485  112536 command_runner.go:130] >     },
	I0420 00:49:30.676536  112536 command_runner.go:130] >     {
	I0420 00:49:30.676553  112536 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0420 00:49:30.676560  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676568  112536 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0420 00:49:30.676578  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676583  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676596  112536 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0420 00:49:30.676611  112536 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0420 00:49:30.676619  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676640  112536 command_runner.go:130] >       "size": "150779692",
	I0420 00:49:30.676651  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.676658  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.676673  112536 command_runner.go:130] >       },
	I0420 00:49:30.676688  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676697  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676703  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676709  112536 command_runner.go:130] >     },
	I0420 00:49:30.676714  112536 command_runner.go:130] >     {
	I0420 00:49:30.676724  112536 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0420 00:49:30.676735  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676743  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0420 00:49:30.676751  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676758  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676771  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0420 00:49:30.676786  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0420 00:49:30.676793  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676797  112536 command_runner.go:130] >       "size": "117609952",
	I0420 00:49:30.676803  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.676812  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.676819  112536 command_runner.go:130] >       },
	I0420 00:49:30.676829  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676836  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676845  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676852  112536 command_runner.go:130] >     },
	I0420 00:49:30.676859  112536 command_runner.go:130] >     {
	I0420 00:49:30.676868  112536 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0420 00:49:30.676877  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676885  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0420 00:49:30.676894  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676901  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676921  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0420 00:49:30.676936  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0420 00:49:30.676946  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676952  112536 command_runner.go:130] >       "size": "112170310",
	I0420 00:49:30.676958  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.676968  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.676974  112536 command_runner.go:130] >       },
	I0420 00:49:30.676980  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676997  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.677002  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.677006  112536 command_runner.go:130] >     },
	I0420 00:49:30.677010  112536 command_runner.go:130] >     {
	I0420 00:49:30.677019  112536 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0420 00:49:30.677025  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.677032  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0420 00:49:30.677038  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677045  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.677079  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0420 00:49:30.677114  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0420 00:49:30.677120  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677129  112536 command_runner.go:130] >       "size": "85932953",
	I0420 00:49:30.677139  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.677146  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.677156  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.677162  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.677167  112536 command_runner.go:130] >     },
	I0420 00:49:30.677171  112536 command_runner.go:130] >     {
	I0420 00:49:30.677178  112536 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0420 00:49:30.677184  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.677192  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0420 00:49:30.677198  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677227  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.677246  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0420 00:49:30.677258  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0420 00:49:30.677265  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677272  112536 command_runner.go:130] >       "size": "63026502",
	I0420 00:49:30.677282  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.677288  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.677296  112536 command_runner.go:130] >       },
	I0420 00:49:30.677303  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.677324  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.677331  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.677338  112536 command_runner.go:130] >     },
	I0420 00:49:30.677343  112536 command_runner.go:130] >     {
	I0420 00:49:30.677373  112536 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0420 00:49:30.677383  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.677389  112536 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0420 00:49:30.677399  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677406  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.677420  112536 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0420 00:49:30.677433  112536 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0420 00:49:30.677442  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677449  112536 command_runner.go:130] >       "size": "750414",
	I0420 00:49:30.677456  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.677460  112536 command_runner.go:130] >         "value": "65535"
	I0420 00:49:30.677465  112536 command_runner.go:130] >       },
	I0420 00:49:30.677472  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.677479  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.677488  112536 command_runner.go:130] >       "pinned": true
	I0420 00:49:30.677494  112536 command_runner.go:130] >     }
	I0420 00:49:30.677506  112536 command_runner.go:130] >   ]
	I0420 00:49:30.677511  112536 command_runner.go:130] > }
	I0420 00:49:30.677781  112536 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:49:30.677795  112536 crio.go:433] Images already preloaded, skipping extraction
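	[editor note] The JSON dump above is the raw output of `sudo crictl images --output json`; it is what lets the run conclude "all images are preloaded" and skip extraction. A rough Go sketch of decoding that structure and checking for an expected tag follows; the imageList type and hasTag function are assumptions for illustration, not the actual crio.go implementation.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors the relevant fields of `crictl images --output json`
	// as shown in the log above. Field names here are illustrative.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	// hasTag reports whether any image in the JSON output carries the given repo tag.
	func hasTag(raw []byte, tag string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		raw := []byte(`{"images":[{"id":"e6f18","repoTags":["registry.k8s.io/pause:3.9"],"repoDigests":[],"size":"750414","pinned":true}]}`)
		ok, err := hasTag(raw, "registry.k8s.io/pause:3.9")
		fmt.Println(ok, err)
	}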
	I0420 00:49:30.677848  112536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:49:30.716526  112536 command_runner.go:130] > {
	I0420 00:49:30.716556  112536 command_runner.go:130] >   "images": [
	I0420 00:49:30.716563  112536 command_runner.go:130] >     {
	I0420 00:49:30.716576  112536 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0420 00:49:30.716594  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.716607  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0420 00:49:30.716613  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716620  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.716640  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0420 00:49:30.716651  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0420 00:49:30.716661  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716668  112536 command_runner.go:130] >       "size": "65291810",
	I0420 00:49:30.716675  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.716680  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.716705  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.716716  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.716722  112536 command_runner.go:130] >     },
	I0420 00:49:30.716727  112536 command_runner.go:130] >     {
	I0420 00:49:30.716739  112536 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0420 00:49:30.716749  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.716758  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0420 00:49:30.716767  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716774  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.716788  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0420 00:49:30.716797  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0420 00:49:30.716804  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716808  112536 command_runner.go:130] >       "size": "1363676",
	I0420 00:49:30.716814  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.716820  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.716827  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.716831  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.716837  112536 command_runner.go:130] >     },
	I0420 00:49:30.716841  112536 command_runner.go:130] >     {
	I0420 00:49:30.716849  112536 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0420 00:49:30.716855  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.716861  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0420 00:49:30.716867  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716872  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.716882  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0420 00:49:30.716891  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0420 00:49:30.716897  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716901  112536 command_runner.go:130] >       "size": "31470524",
	I0420 00:49:30.716907  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.716911  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.716918  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.716922  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.716928  112536 command_runner.go:130] >     },
	I0420 00:49:30.716931  112536 command_runner.go:130] >     {
	I0420 00:49:30.716940  112536 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0420 00:49:30.716946  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.716956  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0420 00:49:30.716962  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716966  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.716976  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0420 00:49:30.716995  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0420 00:49:30.717001  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717005  112536 command_runner.go:130] >       "size": "61245718",
	I0420 00:49:30.717012  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.717016  112536 command_runner.go:130] >       "username": "nonroot",
	I0420 00:49:30.717024  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717029  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717033  112536 command_runner.go:130] >     },
	I0420 00:49:30.717038  112536 command_runner.go:130] >     {
	I0420 00:49:30.717046  112536 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0420 00:49:30.717053  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717058  112536 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0420 00:49:30.717063  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717067  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717076  112536 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0420 00:49:30.717089  112536 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0420 00:49:30.717097  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717104  112536 command_runner.go:130] >       "size": "150779692",
	I0420 00:49:30.717108  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717111  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.717116  112536 command_runner.go:130] >       },
	I0420 00:49:30.717120  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717126  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717130  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717136  112536 command_runner.go:130] >     },
	I0420 00:49:30.717140  112536 command_runner.go:130] >     {
	I0420 00:49:30.717145  112536 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0420 00:49:30.717152  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717156  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0420 00:49:30.717162  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717166  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717176  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0420 00:49:30.717191  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0420 00:49:30.717196  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717200  112536 command_runner.go:130] >       "size": "117609952",
	I0420 00:49:30.717203  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717210  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.717213  112536 command_runner.go:130] >       },
	I0420 00:49:30.717219  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717223  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717229  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717233  112536 command_runner.go:130] >     },
	I0420 00:49:30.717238  112536 command_runner.go:130] >     {
	I0420 00:49:30.717245  112536 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0420 00:49:30.717251  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717256  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0420 00:49:30.717262  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717266  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717276  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0420 00:49:30.717285  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0420 00:49:30.717291  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717297  112536 command_runner.go:130] >       "size": "112170310",
	I0420 00:49:30.717303  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717317  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.717326  112536 command_runner.go:130] >       },
	I0420 00:49:30.717332  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717341  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717347  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717351  112536 command_runner.go:130] >     },
	I0420 00:49:30.717356  112536 command_runner.go:130] >     {
	I0420 00:49:30.717362  112536 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0420 00:49:30.717369  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717373  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0420 00:49:30.717379  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717384  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717406  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0420 00:49:30.717416  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0420 00:49:30.717420  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717431  112536 command_runner.go:130] >       "size": "85932953",
	I0420 00:49:30.717437  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.717441  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717448  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717451  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717457  112536 command_runner.go:130] >     },
	I0420 00:49:30.717461  112536 command_runner.go:130] >     {
	I0420 00:49:30.717471  112536 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0420 00:49:30.717480  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717492  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0420 00:49:30.717501  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717510  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717524  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0420 00:49:30.717539  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0420 00:49:30.717547  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717556  112536 command_runner.go:130] >       "size": "63026502",
	I0420 00:49:30.717565  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717574  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.717586  112536 command_runner.go:130] >       },
	I0420 00:49:30.717600  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717606  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717610  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717616  112536 command_runner.go:130] >     },
	I0420 00:49:30.717620  112536 command_runner.go:130] >     {
	I0420 00:49:30.717628  112536 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0420 00:49:30.717632  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717639  112536 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0420 00:49:30.717642  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717649  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717656  112536 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0420 00:49:30.717668  112536 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0420 00:49:30.717674  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717678  112536 command_runner.go:130] >       "size": "750414",
	I0420 00:49:30.717684  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717688  112536 command_runner.go:130] >         "value": "65535"
	I0420 00:49:30.717692  112536 command_runner.go:130] >       },
	I0420 00:49:30.717705  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717711  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717715  112536 command_runner.go:130] >       "pinned": true
	I0420 00:49:30.717721  112536 command_runner.go:130] >     }
	I0420 00:49:30.717724  112536 command_runner.go:130] >   ]
	I0420 00:49:30.717728  112536 command_runner.go:130] > }
	I0420 00:49:30.717835  112536 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:49:30.717847  112536 cache_images.go:84] Images are preloaded, skipping loading
	I0420 00:49:30.717854  112536 kubeadm.go:928] updating node { 192.168.39.200 8443 v1.30.0 crio true true} ...
	I0420 00:49:30.718011  112536 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-059001 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
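	[editor note] The kubelet systemd drop-in printed above is derived from the node values that follow it in the log (Kubernetes version, hostname override, node IP). A hedged Go sketch of how such an ExecStart line maps to those values is shown below; nodeConfig and kubeletExecStart are hypothetical names used only to illustrate the flag assembly visible in the log, not minikube's actual kubeadm.go code.

	package main

	import (
		"fmt"
		"strings"
	)

	// nodeConfig holds the handful of values visible in the kubelet unit above.
	// Hypothetical type for illustration.
	type nodeConfig struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	// kubeletExecStart assembles an ExecStart line like the one in the drop-in above.
	func kubeletExecStart(c nodeConfig) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + c.Hostname,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + c.NodeIP,
		}
		bin := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", c.KubernetesVersion)
		return "ExecStart=" + bin + " " + strings.Join(flags, " ")
	}

	func main() {
		fmt.Println(kubeletExecStart(nodeConfig{
			KubernetesVersion: "v1.30.0",
			Hostname:          "multinode-059001",
			NodeIP:            "192.168.39.200",
		}))
	}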
	I0420 00:49:30.718099  112536 ssh_runner.go:195] Run: crio config
	I0420 00:49:30.758394  112536 command_runner.go:130] ! time="2024-04-20 00:49:30.736041994Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0420 00:49:30.769867  112536 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0420 00:49:30.777771  112536 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0420 00:49:30.777797  112536 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0420 00:49:30.777808  112536 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0420 00:49:30.777813  112536 command_runner.go:130] > #
	I0420 00:49:30.777822  112536 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0420 00:49:30.777832  112536 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0420 00:49:30.777842  112536 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0420 00:49:30.777863  112536 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0420 00:49:30.777872  112536 command_runner.go:130] > # reload'.
	I0420 00:49:30.777888  112536 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0420 00:49:30.777901  112536 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0420 00:49:30.777921  112536 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0420 00:49:30.777933  112536 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0420 00:49:30.777942  112536 command_runner.go:130] > [crio]
	I0420 00:49:30.777954  112536 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0420 00:49:30.777965  112536 command_runner.go:130] > # containers images, in this directory.
	I0420 00:49:30.777976  112536 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0420 00:49:30.777995  112536 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0420 00:49:30.778006  112536 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0420 00:49:30.778020  112536 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0420 00:49:30.778027  112536 command_runner.go:130] > # imagestore = ""
	I0420 00:49:30.778034  112536 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0420 00:49:30.778042  112536 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0420 00:49:30.778046  112536 command_runner.go:130] > storage_driver = "overlay"
	I0420 00:49:30.778055  112536 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0420 00:49:30.778060  112536 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0420 00:49:30.778064  112536 command_runner.go:130] > storage_option = [
	I0420 00:49:30.778068  112536 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0420 00:49:30.778071  112536 command_runner.go:130] > ]
	I0420 00:49:30.778080  112536 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0420 00:49:30.778086  112536 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0420 00:49:30.778093  112536 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0420 00:49:30.778098  112536 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0420 00:49:30.778106  112536 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0420 00:49:30.778114  112536 command_runner.go:130] > # always happen on a node reboot
	I0420 00:49:30.778119  112536 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0420 00:49:30.778139  112536 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0420 00:49:30.778148  112536 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0420 00:49:30.778153  112536 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0420 00:49:30.778160  112536 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0420 00:49:30.778167  112536 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0420 00:49:30.778177  112536 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0420 00:49:30.778183  112536 command_runner.go:130] > # internal_wipe = true
	I0420 00:49:30.778191  112536 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0420 00:49:30.778198  112536 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0420 00:49:30.778207  112536 command_runner.go:130] > # internal_repair = false
	I0420 00:49:30.778215  112536 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0420 00:49:30.778222  112536 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0420 00:49:30.778229  112536 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0420 00:49:30.778236  112536 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0420 00:49:30.778247  112536 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0420 00:49:30.778253  112536 command_runner.go:130] > [crio.api]
	I0420 00:49:30.778258  112536 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0420 00:49:30.778262  112536 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0420 00:49:30.778268  112536 command_runner.go:130] > # IP address on which the stream server will listen.
	I0420 00:49:30.778272  112536 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0420 00:49:30.778280  112536 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0420 00:49:30.778288  112536 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0420 00:49:30.778292  112536 command_runner.go:130] > # stream_port = "0"
	I0420 00:49:30.778300  112536 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0420 00:49:30.778304  112536 command_runner.go:130] > # stream_enable_tls = false
	I0420 00:49:30.778312  112536 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0420 00:49:30.778316  112536 command_runner.go:130] > # stream_idle_timeout = ""
	I0420 00:49:30.778325  112536 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0420 00:49:30.778331  112536 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0420 00:49:30.778337  112536 command_runner.go:130] > # minutes.
	I0420 00:49:30.778340  112536 command_runner.go:130] > # stream_tls_cert = ""
	I0420 00:49:30.778349  112536 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0420 00:49:30.778356  112536 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0420 00:49:30.778363  112536 command_runner.go:130] > # stream_tls_key = ""
	I0420 00:49:30.778369  112536 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0420 00:49:30.778377  112536 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0420 00:49:30.778396  112536 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0420 00:49:30.778402  112536 command_runner.go:130] > # stream_tls_ca = ""
	I0420 00:49:30.778410  112536 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0420 00:49:30.778417  112536 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0420 00:49:30.778424  112536 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0420 00:49:30.778436  112536 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0420 00:49:30.778444  112536 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0420 00:49:30.778451  112536 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0420 00:49:30.778458  112536 command_runner.go:130] > [crio.runtime]
	I0420 00:49:30.778468  112536 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0420 00:49:30.778479  112536 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0420 00:49:30.778487  112536 command_runner.go:130] > # "nofile=1024:2048"
	I0420 00:49:30.778500  112536 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0420 00:49:30.778510  112536 command_runner.go:130] > # default_ulimits = [
	I0420 00:49:30.778517  112536 command_runner.go:130] > # ]
	I0420 00:49:30.778526  112536 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0420 00:49:30.778535  112536 command_runner.go:130] > # no_pivot = false
	I0420 00:49:30.778551  112536 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0420 00:49:30.778564  112536 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0420 00:49:30.778575  112536 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0420 00:49:30.778587  112536 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0420 00:49:30.778598  112536 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0420 00:49:30.778611  112536 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0420 00:49:30.778618  112536 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0420 00:49:30.778623  112536 command_runner.go:130] > # Cgroup setting for conmon
	I0420 00:49:30.778629  112536 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0420 00:49:30.778640  112536 command_runner.go:130] > conmon_cgroup = "pod"
	I0420 00:49:30.778647  112536 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0420 00:49:30.778654  112536 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0420 00:49:30.778661  112536 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0420 00:49:30.778667  112536 command_runner.go:130] > conmon_env = [
	I0420 00:49:30.778673  112536 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0420 00:49:30.778679  112536 command_runner.go:130] > ]
	I0420 00:49:30.778684  112536 command_runner.go:130] > # Additional environment variables to set for all the
	I0420 00:49:30.778691  112536 command_runner.go:130] > # containers. These are overridden if set in the
	I0420 00:49:30.778696  112536 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0420 00:49:30.778703  112536 command_runner.go:130] > # default_env = [
	I0420 00:49:30.778706  112536 command_runner.go:130] > # ]
	I0420 00:49:30.778714  112536 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0420 00:49:30.778721  112536 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0420 00:49:30.778727  112536 command_runner.go:130] > # selinux = false
	I0420 00:49:30.778733  112536 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0420 00:49:30.778741  112536 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0420 00:49:30.778747  112536 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0420 00:49:30.778754  112536 command_runner.go:130] > # seccomp_profile = ""
	I0420 00:49:30.778767  112536 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0420 00:49:30.778775  112536 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0420 00:49:30.778781  112536 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0420 00:49:30.778787  112536 command_runner.go:130] > # which might increase security.
	I0420 00:49:30.778792  112536 command_runner.go:130] > # This option is currently deprecated,
	I0420 00:49:30.778797  112536 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0420 00:49:30.778804  112536 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0420 00:49:30.778810  112536 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0420 00:49:30.778819  112536 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0420 00:49:30.778825  112536 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0420 00:49:30.778835  112536 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0420 00:49:30.778842  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.778846  112536 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0420 00:49:30.778854  112536 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0420 00:49:30.778858  112536 command_runner.go:130] > # the cgroup blockio controller.
	I0420 00:49:30.778863  112536 command_runner.go:130] > # blockio_config_file = ""
	I0420 00:49:30.778870  112536 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0420 00:49:30.778876  112536 command_runner.go:130] > # blockio parameters.
	I0420 00:49:30.778880  112536 command_runner.go:130] > # blockio_reload = false
	I0420 00:49:30.778887  112536 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0420 00:49:30.778893  112536 command_runner.go:130] > # irqbalance daemon.
	I0420 00:49:30.778898  112536 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0420 00:49:30.778907  112536 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0420 00:49:30.778913  112536 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0420 00:49:30.778922  112536 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0420 00:49:30.778927  112536 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0420 00:49:30.778935  112536 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0420 00:49:30.778940  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.778947  112536 command_runner.go:130] > # rdt_config_file = ""
	I0420 00:49:30.778952  112536 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0420 00:49:30.778958  112536 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0420 00:49:30.778987  112536 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0420 00:49:30.778995  112536 command_runner.go:130] > # separate_pull_cgroup = ""
	I0420 00:49:30.779003  112536 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0420 00:49:30.779009  112536 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0420 00:49:30.779015  112536 command_runner.go:130] > # will be added.
	I0420 00:49:30.779130  112536 command_runner.go:130] > # default_capabilities = [
	I0420 00:49:30.779271  112536 command_runner.go:130] > # 	"CHOWN",
	I0420 00:49:30.779285  112536 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0420 00:49:30.779290  112536 command_runner.go:130] > # 	"FSETID",
	I0420 00:49:30.779296  112536 command_runner.go:130] > # 	"FOWNER",
	I0420 00:49:30.779302  112536 command_runner.go:130] > # 	"SETGID",
	I0420 00:49:30.779308  112536 command_runner.go:130] > # 	"SETUID",
	I0420 00:49:30.779315  112536 command_runner.go:130] > # 	"SETPCAP",
	I0420 00:49:30.779410  112536 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0420 00:49:30.779441  112536 command_runner.go:130] > # 	"KILL",
	I0420 00:49:30.779446  112536 command_runner.go:130] > # ]
	I0420 00:49:30.779463  112536 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0420 00:49:30.779487  112536 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0420 00:49:30.779498  112536 command_runner.go:130] > # add_inheritable_capabilities = false
	I0420 00:49:30.779514  112536 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0420 00:49:30.779528  112536 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0420 00:49:30.779535  112536 command_runner.go:130] > default_sysctls = [
	I0420 00:49:30.779542  112536 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0420 00:49:30.779547  112536 command_runner.go:130] > ]
	I0420 00:49:30.779560  112536 command_runner.go:130] > # List of devices on the host that a
	I0420 00:49:30.779570  112536 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0420 00:49:30.779576  112536 command_runner.go:130] > # allowed_devices = [
	I0420 00:49:30.779582  112536 command_runner.go:130] > # 	"/dev/fuse",
	I0420 00:49:30.779587  112536 command_runner.go:130] > # ]
	I0420 00:49:30.779600  112536 command_runner.go:130] > # List of additional devices. specified as
	I0420 00:49:30.779611  112536 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0420 00:49:30.779620  112536 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0420 00:49:30.779634  112536 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0420 00:49:30.779640  112536 command_runner.go:130] > # additional_devices = [
	I0420 00:49:30.779646  112536 command_runner.go:130] > # ]
	I0420 00:49:30.779654  112536 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0420 00:49:30.779660  112536 command_runner.go:130] > # cdi_spec_dirs = [
	I0420 00:49:30.779671  112536 command_runner.go:130] > # 	"/etc/cdi",
	I0420 00:49:30.779677  112536 command_runner.go:130] > # 	"/var/run/cdi",
	I0420 00:49:30.779683  112536 command_runner.go:130] > # ]
	I0420 00:49:30.779692  112536 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0420 00:49:30.779708  112536 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0420 00:49:30.779715  112536 command_runner.go:130] > # Defaults to false.
	I0420 00:49:30.779723  112536 command_runner.go:130] > # device_ownership_from_security_context = false
	I0420 00:49:30.779734  112536 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0420 00:49:30.779748  112536 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0420 00:49:30.779754  112536 command_runner.go:130] > # hooks_dir = [
	I0420 00:49:30.779762  112536 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0420 00:49:30.779767  112536 command_runner.go:130] > # ]
	I0420 00:49:30.779788  112536 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0420 00:49:30.779799  112536 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0420 00:49:30.779807  112536 command_runner.go:130] > # its default mounts from the following two files:
	I0420 00:49:30.779812  112536 command_runner.go:130] > #
	I0420 00:49:30.779826  112536 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0420 00:49:30.779837  112536 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0420 00:49:30.779853  112536 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0420 00:49:30.779858  112536 command_runner.go:130] > #
	I0420 00:49:30.779868  112536 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0420 00:49:30.779879  112536 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0420 00:49:30.779894  112536 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0420 00:49:30.779904  112536 command_runner.go:130] > #      only add mounts it finds in this file.
	I0420 00:49:30.779908  112536 command_runner.go:130] > #
	I0420 00:49:30.779918  112536 command_runner.go:130] > # default_mounts_file = ""
	I0420 00:49:30.779931  112536 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0420 00:49:30.779940  112536 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0420 00:49:30.779946  112536 command_runner.go:130] > pids_limit = 1024
	I0420 00:49:30.779960  112536 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0420 00:49:30.779968  112536 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0420 00:49:30.779977  112536 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0420 00:49:30.779994  112536 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0420 00:49:30.780000  112536 command_runner.go:130] > # log_size_max = -1
	I0420 00:49:30.780014  112536 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0420 00:49:30.780023  112536 command_runner.go:130] > # log_to_journald = false
	I0420 00:49:30.780034  112536 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0420 00:49:30.780049  112536 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0420 00:49:30.780067  112536 command_runner.go:130] > # Path to directory for container attach sockets.
	I0420 00:49:30.780080  112536 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0420 00:49:30.780090  112536 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0420 00:49:30.780104  112536 command_runner.go:130] > # bind_mount_prefix = ""
	I0420 00:49:30.780141  112536 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0420 00:49:30.780148  112536 command_runner.go:130] > # read_only = false
	I0420 00:49:30.780163  112536 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0420 00:49:30.780185  112536 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0420 00:49:30.780197  112536 command_runner.go:130] > # live configuration reload.
	I0420 00:49:30.780207  112536 command_runner.go:130] > # log_level = "info"
	I0420 00:49:30.780279  112536 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0420 00:49:30.780348  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.780360  112536 command_runner.go:130] > # log_filter = ""
	I0420 00:49:30.780375  112536 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0420 00:49:30.780398  112536 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0420 00:49:30.780411  112536 command_runner.go:130] > # separated by comma.
	I0420 00:49:30.780423  112536 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0420 00:49:30.780435  112536 command_runner.go:130] > # uid_mappings = ""
	I0420 00:49:30.780445  112536 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0420 00:49:30.780457  112536 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0420 00:49:30.780472  112536 command_runner.go:130] > # separated by comma.
	I0420 00:49:30.780487  112536 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0420 00:49:30.780496  112536 command_runner.go:130] > # gid_mappings = ""
	I0420 00:49:30.780515  112536 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0420 00:49:30.780525  112536 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0420 00:49:30.780534  112536 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0420 00:49:30.780550  112536 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0420 00:49:30.780559  112536 command_runner.go:130] > # minimum_mappable_uid = -1
	I0420 00:49:30.780568  112536 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0420 00:49:30.780582  112536 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0420 00:49:30.780591  112536 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0420 00:49:30.780607  112536 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0420 00:49:30.780614  112536 command_runner.go:130] > # minimum_mappable_gid = -1
	I0420 00:49:30.780623  112536 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0420 00:49:30.780636  112536 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0420 00:49:30.780644  112536 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0420 00:49:30.780651  112536 command_runner.go:130] > # ctr_stop_timeout = 30
	I0420 00:49:30.780664  112536 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0420 00:49:30.780673  112536 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0420 00:49:30.780685  112536 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0420 00:49:30.780693  112536 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0420 00:49:30.780769  112536 command_runner.go:130] > drop_infra_ctr = false
	I0420 00:49:30.780779  112536 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0420 00:49:30.781010  112536 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0420 00:49:30.781035  112536 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0420 00:49:30.781044  112536 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0420 00:49:30.781059  112536 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0420 00:49:30.781067  112536 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0420 00:49:30.781079  112536 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0420 00:49:30.781091  112536 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0420 00:49:30.781101  112536 command_runner.go:130] > # shared_cpuset = ""
	I0420 00:49:30.781114  112536 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0420 00:49:30.781126  112536 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0420 00:49:30.781135  112536 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0420 00:49:30.781150  112536 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0420 00:49:30.781157  112536 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0420 00:49:30.781165  112536 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0420 00:49:30.781186  112536 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0420 00:49:30.781199  112536 command_runner.go:130] > # enable_criu_support = false
	I0420 00:49:30.781207  112536 command_runner.go:130] > # Enable/disable the generation of the container,
	I0420 00:49:30.781220  112536 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0420 00:49:30.781231  112536 command_runner.go:130] > # enable_pod_events = false
	I0420 00:49:30.781241  112536 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0420 00:49:30.781266  112536 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0420 00:49:30.781275  112536 command_runner.go:130] > # default_runtime = "runc"
	I0420 00:49:30.781287  112536 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0420 00:49:30.781302  112536 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0420 00:49:30.781332  112536 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0420 00:49:30.781344  112536 command_runner.go:130] > # creation as a file is not desired either.
	I0420 00:49:30.781360  112536 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0420 00:49:30.781371  112536 command_runner.go:130] > # the hostname is being managed dynamically.
	I0420 00:49:30.781382  112536 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0420 00:49:30.781388  112536 command_runner.go:130] > # ]
	I0420 00:49:30.781396  112536 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0420 00:49:30.781410  112536 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0420 00:49:30.781423  112536 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0420 00:49:30.781435  112536 command_runner.go:130] > # Each entry in the table should follow the format:
	I0420 00:49:30.781445  112536 command_runner.go:130] > #
	I0420 00:49:30.781456  112536 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0420 00:49:30.781465  112536 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0420 00:49:30.781519  112536 command_runner.go:130] > # runtime_type = "oci"
	I0420 00:49:30.781540  112536 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0420 00:49:30.781561  112536 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0420 00:49:30.781580  112536 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0420 00:49:30.781593  112536 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0420 00:49:30.781611  112536 command_runner.go:130] > # monitor_env = []
	I0420 00:49:30.781629  112536 command_runner.go:130] > # privileged_without_host_devices = false
	I0420 00:49:30.781656  112536 command_runner.go:130] > # allowed_annotations = []
	I0420 00:49:30.781669  112536 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0420 00:49:30.781677  112536 command_runner.go:130] > # Where:
	I0420 00:49:30.781683  112536 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0420 00:49:30.781692  112536 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0420 00:49:30.781698  112536 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0420 00:49:30.781704  112536 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0420 00:49:30.781712  112536 command_runner.go:130] > #   in $PATH.
	I0420 00:49:30.781717  112536 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0420 00:49:30.781722  112536 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0420 00:49:30.781728  112536 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0420 00:49:30.781734  112536 command_runner.go:130] > #   state.
	I0420 00:49:30.781741  112536 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0420 00:49:30.781749  112536 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0420 00:49:30.781757  112536 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0420 00:49:30.781763  112536 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0420 00:49:30.781770  112536 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0420 00:49:30.781779  112536 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0420 00:49:30.781784  112536 command_runner.go:130] > #   The currently recognized values are:
	I0420 00:49:30.781793  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0420 00:49:30.781801  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0420 00:49:30.781809  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0420 00:49:30.781815  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0420 00:49:30.781825  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0420 00:49:30.781832  112536 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0420 00:49:30.781842  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0420 00:49:30.781851  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0420 00:49:30.781860  112536 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0420 00:49:30.781868  112536 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0420 00:49:30.781873  112536 command_runner.go:130] > #   deprecated option "conmon".
	I0420 00:49:30.781880  112536 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0420 00:49:30.781885  112536 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0420 00:49:30.781894  112536 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0420 00:49:30.781899  112536 command_runner.go:130] > #   should be moved to the container's cgroup
	I0420 00:49:30.781905  112536 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0420 00:49:30.781912  112536 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0420 00:49:30.781919  112536 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0420 00:49:30.781926  112536 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0420 00:49:30.781929  112536 command_runner.go:130] > #
	I0420 00:49:30.781934  112536 command_runner.go:130] > # Using the seccomp notifier feature:
	I0420 00:49:30.781942  112536 command_runner.go:130] > #
	I0420 00:49:30.781950  112536 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0420 00:49:30.781957  112536 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0420 00:49:30.781962  112536 command_runner.go:130] > #
	I0420 00:49:30.781968  112536 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0420 00:49:30.781976  112536 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0420 00:49:30.781979  112536 command_runner.go:130] > #
	I0420 00:49:30.781985  112536 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0420 00:49:30.781990  112536 command_runner.go:130] > # feature.
	I0420 00:49:30.781994  112536 command_runner.go:130] > #
	I0420 00:49:30.781999  112536 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0420 00:49:30.782008  112536 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0420 00:49:30.782013  112536 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0420 00:49:30.782021  112536 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0420 00:49:30.782028  112536 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0420 00:49:30.782034  112536 command_runner.go:130] > #
	I0420 00:49:30.782042  112536 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0420 00:49:30.782050  112536 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0420 00:49:30.782053  112536 command_runner.go:130] > #
	I0420 00:49:30.782059  112536 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0420 00:49:30.782068  112536 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0420 00:49:30.782071  112536 command_runner.go:130] > #
	I0420 00:49:30.782076  112536 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0420 00:49:30.782083  112536 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0420 00:49:30.782087  112536 command_runner.go:130] > # limitation.
	I0420 00:49:30.782093  112536 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0420 00:49:30.782099  112536 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0420 00:49:30.782104  112536 command_runner.go:130] > runtime_type = "oci"
	I0420 00:49:30.782111  112536 command_runner.go:130] > runtime_root = "/run/runc"
	I0420 00:49:30.782114  112536 command_runner.go:130] > runtime_config_path = ""
	I0420 00:49:30.782119  112536 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0420 00:49:30.782126  112536 command_runner.go:130] > monitor_cgroup = "pod"
	I0420 00:49:30.782130  112536 command_runner.go:130] > monitor_exec_cgroup = ""
	I0420 00:49:30.782135  112536 command_runner.go:130] > monitor_env = [
	I0420 00:49:30.782141  112536 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0420 00:49:30.782147  112536 command_runner.go:130] > ]
	I0420 00:49:30.782152  112536 command_runner.go:130] > privileged_without_host_devices = false
	I0420 00:49:30.782158  112536 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0420 00:49:30.782166  112536 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0420 00:49:30.782172  112536 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0420 00:49:30.782181  112536 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0420 00:49:30.782191  112536 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0420 00:49:30.782199  112536 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0420 00:49:30.782208  112536 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0420 00:49:30.782218  112536 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0420 00:49:30.782223  112536 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0420 00:49:30.782231  112536 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0420 00:49:30.782237  112536 command_runner.go:130] > # Example:
	I0420 00:49:30.782241  112536 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0420 00:49:30.782249  112536 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0420 00:49:30.782254  112536 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0420 00:49:30.782260  112536 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0420 00:49:30.782264  112536 command_runner.go:130] > # cpuset = 0
	I0420 00:49:30.782268  112536 command_runner.go:130] > # cpushares = "0-1"
	I0420 00:49:30.782271  112536 command_runner.go:130] > # Where:
	I0420 00:49:30.782276  112536 command_runner.go:130] > # The workload name is workload-type.
	I0420 00:49:30.782282  112536 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0420 00:49:30.782290  112536 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0420 00:49:30.782295  112536 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0420 00:49:30.782305  112536 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0420 00:49:30.782311  112536 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0420 00:49:30.782317  112536 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0420 00:49:30.782323  112536 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0420 00:49:30.782330  112536 command_runner.go:130] > # Default value is set to true
	I0420 00:49:30.782334  112536 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0420 00:49:30.782342  112536 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0420 00:49:30.782346  112536 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0420 00:49:30.782353  112536 command_runner.go:130] > # Default value is set to 'false'
	I0420 00:49:30.782358  112536 command_runner.go:130] > # disable_hostport_mapping = false
	I0420 00:49:30.782367  112536 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0420 00:49:30.782370  112536 command_runner.go:130] > #
	I0420 00:49:30.782379  112536 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0420 00:49:30.782384  112536 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0420 00:49:30.782390  112536 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0420 00:49:30.782397  112536 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0420 00:49:30.782401  112536 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0420 00:49:30.782407  112536 command_runner.go:130] > [crio.image]
	I0420 00:49:30.782412  112536 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0420 00:49:30.782416  112536 command_runner.go:130] > # default_transport = "docker://"
	I0420 00:49:30.782422  112536 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0420 00:49:30.782427  112536 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0420 00:49:30.782431  112536 command_runner.go:130] > # global_auth_file = ""
	I0420 00:49:30.782435  112536 command_runner.go:130] > # The image used to instantiate infra containers.
	I0420 00:49:30.782440  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.782444  112536 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0420 00:49:30.782452  112536 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0420 00:49:30.782458  112536 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0420 00:49:30.782463  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.782469  112536 command_runner.go:130] > # pause_image_auth_file = ""
	I0420 00:49:30.782476  112536 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0420 00:49:30.782484  112536 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0420 00:49:30.782493  112536 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0420 00:49:30.782506  112536 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0420 00:49:30.782512  112536 command_runner.go:130] > # pause_command = "/pause"
	I0420 00:49:30.782521  112536 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0420 00:49:30.782533  112536 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0420 00:49:30.782545  112536 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0420 00:49:30.782560  112536 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0420 00:49:30.782573  112536 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0420 00:49:30.782584  112536 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0420 00:49:30.782593  112536 command_runner.go:130] > # pinned_images = [
	I0420 00:49:30.782599  112536 command_runner.go:130] > # ]
	I0420 00:49:30.782610  112536 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0420 00:49:30.782618  112536 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0420 00:49:30.782624  112536 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0420 00:49:30.782637  112536 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0420 00:49:30.782645  112536 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0420 00:49:30.782649  112536 command_runner.go:130] > # signature_policy = ""
	I0420 00:49:30.782655  112536 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0420 00:49:30.782662  112536 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0420 00:49:30.782670  112536 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0420 00:49:30.782676  112536 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0420 00:49:30.782688  112536 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0420 00:49:30.782698  112536 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0420 00:49:30.782704  112536 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0420 00:49:30.782713  112536 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0420 00:49:30.782717  112536 command_runner.go:130] > # changing them here.
	I0420 00:49:30.782724  112536 command_runner.go:130] > # insecure_registries = [
	I0420 00:49:30.782727  112536 command_runner.go:130] > # ]
	I0420 00:49:30.782735  112536 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0420 00:49:30.782740  112536 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0420 00:49:30.782747  112536 command_runner.go:130] > # image_volumes = "mkdir"
	I0420 00:49:30.782755  112536 command_runner.go:130] > # Temporary directory to use for storing big files
	I0420 00:49:30.782759  112536 command_runner.go:130] > # big_files_temporary_dir = ""
	I0420 00:49:30.782766  112536 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0420 00:49:30.782772  112536 command_runner.go:130] > # CNI plugins.
	I0420 00:49:30.782776  112536 command_runner.go:130] > [crio.network]
	I0420 00:49:30.782784  112536 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0420 00:49:30.782790  112536 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0420 00:49:30.782796  112536 command_runner.go:130] > # cni_default_network = ""
	I0420 00:49:30.782802  112536 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0420 00:49:30.782808  112536 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0420 00:49:30.782813  112536 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0420 00:49:30.782819  112536 command_runner.go:130] > # plugin_dirs = [
	I0420 00:49:30.782823  112536 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0420 00:49:30.782826  112536 command_runner.go:130] > # ]
	I0420 00:49:30.782831  112536 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0420 00:49:30.782835  112536 command_runner.go:130] > [crio.metrics]
	I0420 00:49:30.782840  112536 command_runner.go:130] > # Globally enable or disable metrics support.
	I0420 00:49:30.782846  112536 command_runner.go:130] > enable_metrics = true
	I0420 00:49:30.782851  112536 command_runner.go:130] > # Specify enabled metrics collectors.
	I0420 00:49:30.782858  112536 command_runner.go:130] > # Per default all metrics are enabled.
	I0420 00:49:30.782864  112536 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0420 00:49:30.782873  112536 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0420 00:49:30.782878  112536 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0420 00:49:30.782884  112536 command_runner.go:130] > # metrics_collectors = [
	I0420 00:49:30.782888  112536 command_runner.go:130] > # 	"operations",
	I0420 00:49:30.782892  112536 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0420 00:49:30.782899  112536 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0420 00:49:30.782903  112536 command_runner.go:130] > # 	"operations_errors",
	I0420 00:49:30.782908  112536 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0420 00:49:30.782913  112536 command_runner.go:130] > # 	"image_pulls_by_name",
	I0420 00:49:30.782918  112536 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0420 00:49:30.782924  112536 command_runner.go:130] > # 	"image_pulls_failures",
	I0420 00:49:30.782930  112536 command_runner.go:130] > # 	"image_pulls_successes",
	I0420 00:49:30.782937  112536 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0420 00:49:30.782941  112536 command_runner.go:130] > # 	"image_layer_reuse",
	I0420 00:49:30.782946  112536 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0420 00:49:30.782953  112536 command_runner.go:130] > # 	"containers_oom_total",
	I0420 00:49:30.782956  112536 command_runner.go:130] > # 	"containers_oom",
	I0420 00:49:30.782962  112536 command_runner.go:130] > # 	"processes_defunct",
	I0420 00:49:30.782966  112536 command_runner.go:130] > # 	"operations_total",
	I0420 00:49:30.782971  112536 command_runner.go:130] > # 	"operations_latency_seconds",
	I0420 00:49:30.782978  112536 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0420 00:49:30.782982  112536 command_runner.go:130] > # 	"operations_errors_total",
	I0420 00:49:30.782986  112536 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0420 00:49:30.783002  112536 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0420 00:49:30.783011  112536 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0420 00:49:30.783016  112536 command_runner.go:130] > # 	"image_pulls_success_total",
	I0420 00:49:30.783022  112536 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0420 00:49:30.783027  112536 command_runner.go:130] > # 	"containers_oom_count_total",
	I0420 00:49:30.783033  112536 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0420 00:49:30.783038  112536 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0420 00:49:30.783042  112536 command_runner.go:130] > # ]
	I0420 00:49:30.783047  112536 command_runner.go:130] > # The port on which the metrics server will listen.
	I0420 00:49:30.783053  112536 command_runner.go:130] > # metrics_port = 9090
	I0420 00:49:30.783058  112536 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0420 00:49:30.783064  112536 command_runner.go:130] > # metrics_socket = ""
	I0420 00:49:30.783069  112536 command_runner.go:130] > # The certificate for the secure metrics server.
	I0420 00:49:30.783075  112536 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0420 00:49:30.783083  112536 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0420 00:49:30.783088  112536 command_runner.go:130] > # certificate on any modification event.
	I0420 00:49:30.783094  112536 command_runner.go:130] > # metrics_cert = ""
	I0420 00:49:30.783099  112536 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0420 00:49:30.783106  112536 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0420 00:49:30.783110  112536 command_runner.go:130] > # metrics_key = ""
	I0420 00:49:30.783115  112536 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0420 00:49:30.783119  112536 command_runner.go:130] > [crio.tracing]
	I0420 00:49:30.783127  112536 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0420 00:49:30.783131  112536 command_runner.go:130] > # enable_tracing = false
	I0420 00:49:30.783137  112536 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0420 00:49:30.783142  112536 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0420 00:49:30.783150  112536 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0420 00:49:30.783159  112536 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0420 00:49:30.783164  112536 command_runner.go:130] > # CRI-O NRI configuration.
	I0420 00:49:30.783170  112536 command_runner.go:130] > [crio.nri]
	I0420 00:49:30.783174  112536 command_runner.go:130] > # Globally enable or disable NRI.
	I0420 00:49:30.783178  112536 command_runner.go:130] > # enable_nri = false
	I0420 00:49:30.783183  112536 command_runner.go:130] > # NRI socket to listen on.
	I0420 00:49:30.783192  112536 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0420 00:49:30.783199  112536 command_runner.go:130] > # NRI plugin directory to use.
	I0420 00:49:30.783204  112536 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0420 00:49:30.783211  112536 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0420 00:49:30.783216  112536 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0420 00:49:30.783221  112536 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0420 00:49:30.783228  112536 command_runner.go:130] > # nri_disable_connections = false
	I0420 00:49:30.783233  112536 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0420 00:49:30.783237  112536 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0420 00:49:30.783244  112536 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0420 00:49:30.783249  112536 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0420 00:49:30.783257  112536 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0420 00:49:30.783261  112536 command_runner.go:130] > [crio.stats]
	I0420 00:49:30.783269  112536 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0420 00:49:30.783275  112536 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0420 00:49:30.783282  112536 command_runner.go:130] > # stats_collection_period = 0
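
	For reference, the [crio.runtime.runtimes.*] table format dumped above is how additional runtime handlers are declared. A minimal sketch, not taken from this run: registering a hypothetical second handler named "crun" through a drop-in file. The path /etc/crio/crio.conf.d/ and the binary location /usr/bin/crun are assumptions; the field names simply mirror the runc entry printed above.

	# Hypothetical drop-in only; values mirror the [crio.runtime.runtimes.runc] block above.
	sudo tee /etc/crio/crio.conf.d/10-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	EOF
	sudo systemctl restart crio   # pick up the new handler
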
	I0420 00:49:30.783401  112536 cni.go:84] Creating CNI manager for ""
	I0420 00:49:30.783414  112536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0420 00:49:30.783425  112536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 00:49:30.783445  112536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-059001 NodeName:multinode-059001 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 00:49:30.783642  112536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-059001"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 00:49:30.783723  112536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:49:30.795058  112536 command_runner.go:130] > kubeadm
	I0420 00:49:30.795081  112536 command_runner.go:130] > kubectl
	I0420 00:49:30.795088  112536 command_runner.go:130] > kubelet
	I0420 00:49:30.795110  112536 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 00:49:30.795174  112536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 00:49:30.805330  112536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0420 00:49:30.823866  112536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:49:30.843271  112536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0420 00:49:30.862128  112536 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0420 00:49:30.866579  112536 command_runner.go:130] > 192.168.39.200	control-plane.minikube.internal
	I0420 00:49:30.866657  112536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:49:31.011744  112536 ssh_runner.go:195] Run: sudo systemctl start kubelet
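
	The two files copied above (10-kubeadm.conf and kubelet.service) are activated by the daemon-reload and start just logged. A quick way to confirm the result on the node, not captured in this run's output, is:

	systemctl is-active kubelet            # expect "active"
	systemctl cat kubelet | head -n 20     # installed unit plus the 10-kubeadm.conf drop-in
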
	I0420 00:49:31.028284  112536 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001 for IP: 192.168.39.200
	I0420 00:49:31.028315  112536 certs.go:194] generating shared ca certs ...
	I0420 00:49:31.028337  112536 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:49:31.028526  112536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:49:31.028584  112536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:49:31.028597  112536 certs.go:256] generating profile certs ...
	I0420 00:49:31.028695  112536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/client.key
	I0420 00:49:31.028752  112536 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.key.73861182
	I0420 00:49:31.028805  112536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.key
	I0420 00:49:31.028818  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:49:31.028833  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:49:31.028845  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:49:31.028856  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:49:31.028868  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:49:31.028881  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:49:31.028893  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:49:31.028905  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:49:31.028957  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:49:31.028989  112536 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:49:31.028999  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:49:31.029019  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:49:31.029042  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:49:31.029061  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:49:31.029097  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:49:31.029131  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.029145  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.029156  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.029863  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:49:31.057216  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:49:31.082773  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:49:31.108867  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:49:31.134835  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 00:49:31.160520  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 00:49:31.186784  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:49:31.213175  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 00:49:31.239183  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:49:31.265028  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:49:31.291295  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:49:31.316318  112536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 00:49:31.334828  112536 ssh_runner.go:195] Run: openssl version
	I0420 00:49:31.341270  112536 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0420 00:49:31.341372  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:49:31.353571  112536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.358437  112536 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.358576  112536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.358628  112536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.364707  112536 command_runner.go:130] > 3ec20f2e
	I0420 00:49:31.364775  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 00:49:31.375556  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:49:31.387697  112536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.393386  112536 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.393411  112536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.393448  112536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.399416  112536 command_runner.go:130] > b5213941
	I0420 00:49:31.399512  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 00:49:31.410186  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:49:31.422373  112536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.427363  112536 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.427390  112536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.427426  112536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.433585  112536 command_runner.go:130] > 51391683
	I0420 00:49:31.433641  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
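
	The three blocks above repeat the same pattern for trusting a CA on the node: ensure the PEM exists under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0. A condensed sketch of that pattern, using one of the files from this run:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941, as logged above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # makes the CA discoverable by OpenSSL
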
	I0420 00:49:31.444195  112536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:49:31.449048  112536 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:49:31.449066  112536 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0420 00:49:31.449072  112536 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0420 00:49:31.449078  112536 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0420 00:49:31.449083  112536 command_runner.go:130] > Access: 2024-04-20 00:43:14.037961774 +0000
	I0420 00:49:31.449088  112536 command_runner.go:130] > Modify: 2024-04-20 00:43:14.037961774 +0000
	I0420 00:49:31.449093  112536 command_runner.go:130] > Change: 2024-04-20 00:43:14.037961774 +0000
	I0420 00:49:31.449100  112536 command_runner.go:130] >  Birth: 2024-04-20 00:43:14.037961774 +0000
	I0420 00:49:31.449176  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 00:49:31.455208  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.455262  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 00:49:31.461056  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.461243  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 00:49:31.467176  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.467238  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 00:49:31.473891  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.473952  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 00:49:31.479885  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.479932  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 00:49:31.485689  112536 command_runner.go:130] > Certificate will not expire
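
	Each expiry probe above relies on openssl's -checkend flag: the command exits 0 only if the certificate is still valid for the given number of seconds (86400 = 24 hours). A standalone equivalent for one of the certs checked in this run:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for at least 24h" \
	  || echo "expires within 24h"
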
	I0420 00:49:31.485988  112536 kubeadm.go:391] StartCluster: {Name:multinode-059001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:f
alse kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:49:31.486106  112536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:49:31.486143  112536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:49:31.525870  112536 command_runner.go:130] > f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87
	I0420 00:49:31.525894  112536 command_runner.go:130] > 965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969
	I0420 00:49:31.525900  112536 command_runner.go:130] > 0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa
	I0420 00:49:31.525907  112536 command_runner.go:130] > 278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95
	I0420 00:49:31.525912  112536 command_runner.go:130] > e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175
	I0420 00:49:31.525918  112536 command_runner.go:130] > 339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb
	I0420 00:49:31.525923  112536 command_runner.go:130] > 81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c
	I0420 00:49:31.525932  112536 command_runner.go:130] > b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7
	I0420 00:49:31.527410  112536 cri.go:89] found id: "f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87"
	I0420 00:49:31.527431  112536 cri.go:89] found id: "965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969"
	I0420 00:49:31.527437  112536 cri.go:89] found id: "0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa"
	I0420 00:49:31.527442  112536 cri.go:89] found id: "278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95"
	I0420 00:49:31.527445  112536 cri.go:89] found id: "e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175"
	I0420 00:49:31.527450  112536 cri.go:89] found id: "339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb"
	I0420 00:49:31.527454  112536 cri.go:89] found id: "81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c"
	I0420 00:49:31.527458  112536 cri.go:89] found id: "b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7"
	I0420 00:49:31.527461  112536 cri.go:89] found id: ""
	I0420 00:49:31.527512  112536 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.318202500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713574258318179666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e701377-17ce-45d1-aa6f-cad517f8c7db name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.319103027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23a31925-b7a7-408b-8d11-f551c311a981 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.319183276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23a31925-b7a7-408b-8d11-f551c311a981 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.319777170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c373d9ab9916e6c97f3c326b821922d2b75e734231d5e691275538ed6dd352dd,PodSandboxId:c017c6d333758c4e1b1e4effddfc6c56eae50f96da8ed0078147005856385edc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713574212462664810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35,PodSandboxId:2788e89cb28c46293860818725963f909b316cabd9cc84fcb1e8a22181947892,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713574179072929436,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696,PodSandboxId:fe978d5a63406946295962456854669ea9769a400eac874e7aaa44c8491d0804,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713574178796341941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a8a21632a3f8138cf9baf78d98e99ffb89ba79c6dcb2fb4bf331d33e55ecc5,PodSandboxId:aaa286949dad8993f3974de3fea3c741d3f17f781c0c7bb8ddf4f733f8f3fae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713574178753409689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},An
notations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af,PodSandboxId:e29602718b91850c80b2e9c44f4aa38a3e02888082494f02867b6ea7c12e88b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713574178671788316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.ku
bernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b,PodSandboxId:4e9538c73ad90c10c58f61635c331ccf183de9a61d4c1023c4f3b66f358abe49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713574173902116355,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[string
]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766,PodSandboxId:3c997b5238fd83589c33364348861f0bd2b48e90ef904807e757d8b73db91e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713574173886215827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string{io.kubernetes.container.hash: 70aa52
2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a,PodSandboxId:c348aeb6b3c82f332d3880270a5c307ee7ec368f643e09a47f5ac76c1ea47d7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713574173864354334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2,PodSandboxId:0982ad8753fce485895d750299c436fcb0cc0c564edfbf31dfea886ea7b74cdf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713574173792922346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fe5ec0ecb57f0e82408ca966347ca87580fff9c19829f6a7d70f3f080cf9f3,PodSandboxId:6da2ff71dd49dfaa998acb831908cac2140d89c2b3c838c042b1bc35aae1c2dd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713573870179457901,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87,PodSandboxId:596fbfbd40b0b4c23f81de04a4c41efb03af7eb42307c2821c732e927c5370c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713573819989310919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969,PodSandboxId:355895ccdd2c5766937723d682c03e151b277ec709ceaaef82498a6532e9423c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713573819963026060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},Annotations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa,PodSandboxId:879a6cbe15b42171d5d10281b94041febdf0c636bd4b2e49cc0bdb5ffd056c95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713573818339220687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.kubernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95,PodSandboxId:7a3bea706d980121fcec93816d4e91c120df59efe3fb669dbe21ebe22b0113bf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713573818167001593,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1
-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175,PodSandboxId:0713688d466e52a2c687fde5eab6b6728c28928f688338f1d2291bb4ac90b30b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713573797702898456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string
{io.kubernetes.container.hash: 70aa522c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c,PodSandboxId:209b4833115b9ef8224439c5a8fec12e9155896992835ec5286f6d2a8e8a15f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713573797651921097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb,PodSandboxId:a8c2ccf6a610f7f06ed346ee7c807a16889d78c2d214c682cf8b4de58eab1bb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713573797663346072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io
.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7,PodSandboxId:9fe3a6907a5ae3c1930afaba6a04f92f4977d630ef2fce072aff30eacb46eaa6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713573797591317956,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23a31925-b7a7-408b-8d11-f551c311a981 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.371465567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce30b81f-6223-4e03-a764-919ca0ef9816 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.371643757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce30b81f-6223-4e03-a764-919ca0ef9816 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.373480678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b523818-11a7-4d98-a4ad-3b4d57a076e9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.374117609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713574258374092895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b523818-11a7-4d98-a4ad-3b4d57a076e9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.374825642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2275ce94-54ce-4eb0-adf0-91bf17d64517 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.375010144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2275ce94-54ce-4eb0-adf0-91bf17d64517 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.375439231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c373d9ab9916e6c97f3c326b821922d2b75e734231d5e691275538ed6dd352dd,PodSandboxId:c017c6d333758c4e1b1e4effddfc6c56eae50f96da8ed0078147005856385edc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713574212462664810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35,PodSandboxId:2788e89cb28c46293860818725963f909b316cabd9cc84fcb1e8a22181947892,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713574179072929436,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696,PodSandboxId:fe978d5a63406946295962456854669ea9769a400eac874e7aaa44c8491d0804,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713574178796341941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a8a21632a3f8138cf9baf78d98e99ffb89ba79c6dcb2fb4bf331d33e55ecc5,PodSandboxId:aaa286949dad8993f3974de3fea3c741d3f17f781c0c7bb8ddf4f733f8f3fae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713574178753409689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},An
notations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af,PodSandboxId:e29602718b91850c80b2e9c44f4aa38a3e02888082494f02867b6ea7c12e88b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713574178671788316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.ku
bernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b,PodSandboxId:4e9538c73ad90c10c58f61635c331ccf183de9a61d4c1023c4f3b66f358abe49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713574173902116355,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[string
]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766,PodSandboxId:3c997b5238fd83589c33364348861f0bd2b48e90ef904807e757d8b73db91e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713574173886215827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string{io.kubernetes.container.hash: 70aa52
2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a,PodSandboxId:c348aeb6b3c82f332d3880270a5c307ee7ec368f643e09a47f5ac76c1ea47d7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713574173864354334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2,PodSandboxId:0982ad8753fce485895d750299c436fcb0cc0c564edfbf31dfea886ea7b74cdf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713574173792922346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fe5ec0ecb57f0e82408ca966347ca87580fff9c19829f6a7d70f3f080cf9f3,PodSandboxId:6da2ff71dd49dfaa998acb831908cac2140d89c2b3c838c042b1bc35aae1c2dd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713573870179457901,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87,PodSandboxId:596fbfbd40b0b4c23f81de04a4c41efb03af7eb42307c2821c732e927c5370c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713573819989310919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969,PodSandboxId:355895ccdd2c5766937723d682c03e151b277ec709ceaaef82498a6532e9423c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713573819963026060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},Annotations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa,PodSandboxId:879a6cbe15b42171d5d10281b94041febdf0c636bd4b2e49cc0bdb5ffd056c95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713573818339220687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.kubernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95,PodSandboxId:7a3bea706d980121fcec93816d4e91c120df59efe3fb669dbe21ebe22b0113bf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713573818167001593,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1
-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175,PodSandboxId:0713688d466e52a2c687fde5eab6b6728c28928f688338f1d2291bb4ac90b30b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713573797702898456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string
{io.kubernetes.container.hash: 70aa522c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c,PodSandboxId:209b4833115b9ef8224439c5a8fec12e9155896992835ec5286f6d2a8e8a15f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713573797651921097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb,PodSandboxId:a8c2ccf6a610f7f06ed346ee7c807a16889d78c2d214c682cf8b4de58eab1bb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713573797663346072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io
.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7,PodSandboxId:9fe3a6907a5ae3c1930afaba6a04f92f4977d630ef2fce072aff30eacb46eaa6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713573797591317956,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2275ce94-54ce-4eb0-adf0-91bf17d64517 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.433345112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4eb9277-b393-4f17-88f9-731b1352410f name=/runtime.v1.RuntimeService/Version
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.433416364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4eb9277-b393-4f17-88f9-731b1352410f name=/runtime.v1.RuntimeService/Version
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.435273095Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d252e7f2-6ded-4308-8168-4f2c3a73cea3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.435714622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713574258435690646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d252e7f2-6ded-4308-8168-4f2c3a73cea3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.436684454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=062b71e4-9f93-4a3e-b71e-a8cda5cce445 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.436738809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=062b71e4-9f93-4a3e-b71e-a8cda5cce445 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.437082728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c373d9ab9916e6c97f3c326b821922d2b75e734231d5e691275538ed6dd352dd,PodSandboxId:c017c6d333758c4e1b1e4effddfc6c56eae50f96da8ed0078147005856385edc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713574212462664810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35,PodSandboxId:2788e89cb28c46293860818725963f909b316cabd9cc84fcb1e8a22181947892,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713574179072929436,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696,PodSandboxId:fe978d5a63406946295962456854669ea9769a400eac874e7aaa44c8491d0804,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713574178796341941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a8a21632a3f8138cf9baf78d98e99ffb89ba79c6dcb2fb4bf331d33e55ecc5,PodSandboxId:aaa286949dad8993f3974de3fea3c741d3f17f781c0c7bb8ddf4f733f8f3fae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713574178753409689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},An
notations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af,PodSandboxId:e29602718b91850c80b2e9c44f4aa38a3e02888082494f02867b6ea7c12e88b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713574178671788316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.ku
bernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b,PodSandboxId:4e9538c73ad90c10c58f61635c331ccf183de9a61d4c1023c4f3b66f358abe49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713574173902116355,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[string
]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766,PodSandboxId:3c997b5238fd83589c33364348861f0bd2b48e90ef904807e757d8b73db91e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713574173886215827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string{io.kubernetes.container.hash: 70aa52
2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a,PodSandboxId:c348aeb6b3c82f332d3880270a5c307ee7ec368f643e09a47f5ac76c1ea47d7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713574173864354334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2,PodSandboxId:0982ad8753fce485895d750299c436fcb0cc0c564edfbf31dfea886ea7b74cdf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713574173792922346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fe5ec0ecb57f0e82408ca966347ca87580fff9c19829f6a7d70f3f080cf9f3,PodSandboxId:6da2ff71dd49dfaa998acb831908cac2140d89c2b3c838c042b1bc35aae1c2dd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713573870179457901,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87,PodSandboxId:596fbfbd40b0b4c23f81de04a4c41efb03af7eb42307c2821c732e927c5370c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713573819989310919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969,PodSandboxId:355895ccdd2c5766937723d682c03e151b277ec709ceaaef82498a6532e9423c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713573819963026060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},Annotations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa,PodSandboxId:879a6cbe15b42171d5d10281b94041febdf0c636bd4b2e49cc0bdb5ffd056c95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713573818339220687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.kubernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95,PodSandboxId:7a3bea706d980121fcec93816d4e91c120df59efe3fb669dbe21ebe22b0113bf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713573818167001593,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1
-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175,PodSandboxId:0713688d466e52a2c687fde5eab6b6728c28928f688338f1d2291bb4ac90b30b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713573797702898456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string
{io.kubernetes.container.hash: 70aa522c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c,PodSandboxId:209b4833115b9ef8224439c5a8fec12e9155896992835ec5286f6d2a8e8a15f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713573797651921097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb,PodSandboxId:a8c2ccf6a610f7f06ed346ee7c807a16889d78c2d214c682cf8b4de58eab1bb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713573797663346072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io
.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7,PodSandboxId:9fe3a6907a5ae3c1930afaba6a04f92f4977d630ef2fce072aff30eacb46eaa6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713573797591317956,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=062b71e4-9f93-4a3e-b71e-a8cda5cce445 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.488932032Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33047ddd-0395-48c3-873c-2ab1a2c4d335 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.489324470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33047ddd-0395-48c3-873c-2ab1a2c4d335 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.494520116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1beb62c0-ca7c-4dcf-90d7-75c7ac805bd0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.494994015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713574258494969299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1beb62c0-ca7c-4dcf-90d7-75c7ac805bd0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.496382342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f153f77-8364-4086-b2f0-fb1808eacd85 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.496436775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f153f77-8364-4086-b2f0-fb1808eacd85 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:50:58 multinode-059001 crio[2847]: time="2024-04-20 00:50:58.496834440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c373d9ab9916e6c97f3c326b821922d2b75e734231d5e691275538ed6dd352dd,PodSandboxId:c017c6d333758c4e1b1e4effddfc6c56eae50f96da8ed0078147005856385edc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713574212462664810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35,PodSandboxId:2788e89cb28c46293860818725963f909b316cabd9cc84fcb1e8a22181947892,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713574179072929436,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696,PodSandboxId:fe978d5a63406946295962456854669ea9769a400eac874e7aaa44c8491d0804,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713574178796341941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a8a21632a3f8138cf9baf78d98e99ffb89ba79c6dcb2fb4bf331d33e55ecc5,PodSandboxId:aaa286949dad8993f3974de3fea3c741d3f17f781c0c7bb8ddf4f733f8f3fae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713574178753409689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},An
notations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af,PodSandboxId:e29602718b91850c80b2e9c44f4aa38a3e02888082494f02867b6ea7c12e88b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713574178671788316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.ku
bernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b,PodSandboxId:4e9538c73ad90c10c58f61635c331ccf183de9a61d4c1023c4f3b66f358abe49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713574173902116355,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[string
]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766,PodSandboxId:3c997b5238fd83589c33364348861f0bd2b48e90ef904807e757d8b73db91e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713574173886215827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string{io.kubernetes.container.hash: 70aa52
2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a,PodSandboxId:c348aeb6b3c82f332d3880270a5c307ee7ec368f643e09a47f5ac76c1ea47d7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713574173864354334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2,PodSandboxId:0982ad8753fce485895d750299c436fcb0cc0c564edfbf31dfea886ea7b74cdf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713574173792922346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fe5ec0ecb57f0e82408ca966347ca87580fff9c19829f6a7d70f3f080cf9f3,PodSandboxId:6da2ff71dd49dfaa998acb831908cac2140d89c2b3c838c042b1bc35aae1c2dd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713573870179457901,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87,PodSandboxId:596fbfbd40b0b4c23f81de04a4c41efb03af7eb42307c2821c732e927c5370c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713573819989310919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969,PodSandboxId:355895ccdd2c5766937723d682c03e151b277ec709ceaaef82498a6532e9423c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713573819963026060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},Annotations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa,PodSandboxId:879a6cbe15b42171d5d10281b94041febdf0c636bd4b2e49cc0bdb5ffd056c95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713573818339220687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.kubernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95,PodSandboxId:7a3bea706d980121fcec93816d4e91c120df59efe3fb669dbe21ebe22b0113bf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713573818167001593,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1
-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175,PodSandboxId:0713688d466e52a2c687fde5eab6b6728c28928f688338f1d2291bb4ac90b30b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713573797702898456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string
{io.kubernetes.container.hash: 70aa522c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c,PodSandboxId:209b4833115b9ef8224439c5a8fec12e9155896992835ec5286f6d2a8e8a15f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713573797651921097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb,PodSandboxId:a8c2ccf6a610f7f06ed346ee7c807a16889d78c2d214c682cf8b4de58eab1bb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713573797663346072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io
.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7,PodSandboxId:9fe3a6907a5ae3c1930afaba6a04f92f4977d630ef2fce072aff30eacb46eaa6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713573797591317956,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f153f77-8364-4086-b2f0-fb1808eacd85 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c373d9ab9916e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      46 seconds ago       Running             busybox                   1                   c017c6d333758       busybox-fc5497c4f-xlthm
	b75d3e908c6a1       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   2788e89cb28c4       kindnet-nrhgt
	67e5bf4a09cd8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   fe978d5a63406       coredns-7db6d8ff4d-78rrw
	e0a8a21632a3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   aaa286949dad8       storage-provisioner
	1de6f3bfc27ff       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   e29602718b918       kube-proxy-blctg
	06122a5f65bc2       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   4e9538c73ad90       kube-controller-manager-multinode-059001
	2ec26c00543f4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   3c997b5238fd8       etcd-multinode-059001
	e2f78f6b91aa7       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   c348aeb6b3c82       kube-scheduler-multinode-059001
	cc8dc5eb92c25       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   0982ad8753fce       kube-apiserver-multinode-059001
	53fe5ec0ecb57       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   6da2ff71dd49d       busybox-fc5497c4f-xlthm
	f3eb2c3e1d64f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   596fbfbd40b0b       coredns-7db6d8ff4d-78rrw
	965cc419b10f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   355895ccdd2c5       storage-provisioner
	0cdf1f27fc4a7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   879a6cbe15b42       kube-proxy-blctg
	278b79bc7d7b5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   7a3bea706d980       kindnet-nrhgt
	e6b41406ce7bb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   0713688d466e5       etcd-multinode-059001
	339a729cde4f1       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago        Exited              kube-apiserver            0                   a8c2ccf6a610f       kube-apiserver-multinode-059001
	81d365f1385c8       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago        Exited              kube-controller-manager   0                   209b4833115b9       kube-controller-manager-multinode-059001
	b8e65c0c15cef       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago        Exited              kube-scheduler            0                   9fe3a6907a5ae       kube-scheduler-multinode-059001
	
	
	==> coredns [67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59173 - 3994 "HINFO IN 3491305454883615209.2019287686508915915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017089179s
	
	
	==> coredns [f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87] <==
	[INFO] 10.244.0.3:56450 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001721997s
	[INFO] 10.244.0.3:55783 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081097s
	[INFO] 10.244.0.3:52177 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127753s
	[INFO] 10.244.0.3:54018 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001195973s
	[INFO] 10.244.0.3:32887 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084527s
	[INFO] 10.244.0.3:51028 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120484s
	[INFO] 10.244.0.3:58042 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069878s
	[INFO] 10.244.1.2:37561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146597s
	[INFO] 10.244.1.2:39648 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102418s
	[INFO] 10.244.1.2:48176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138976s
	[INFO] 10.244.1.2:35182 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098899s
	[INFO] 10.244.0.3:44206 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167919s
	[INFO] 10.244.0.3:50161 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112368s
	[INFO] 10.244.0.3:38784 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201148s
	[INFO] 10.244.0.3:54035 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088839s
	[INFO] 10.244.1.2:60259 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194856s
	[INFO] 10.244.1.2:49426 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154222s
	[INFO] 10.244.1.2:43083 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132217s
	[INFO] 10.244.1.2:58393 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011622s
	[INFO] 10.244.0.3:49089 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148632s
	[INFO] 10.244.0.3:38258 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138106s
	[INFO] 10.244.0.3:47064 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079089s
	[INFO] 10.244.0.3:43782 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099575s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-059001
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-059001
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-059001
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T00_43_24_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:43:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-059001
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:50:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:49:37 +0000   Sat, 20 Apr 2024 00:43:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:49:37 +0000   Sat, 20 Apr 2024 00:43:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:49:37 +0000   Sat, 20 Apr 2024 00:43:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:49:37 +0000   Sat, 20 Apr 2024 00:43:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    multinode-059001
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 151894ac4b6d4e5b9d2a7c732c17d3b5
	  System UUID:                151894ac-4b6d-4e5b-9d2a-7c732c17d3b5
	  Boot ID:                    2762f1af-74dd-4fd3-a09b-5b144bfaea57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xlthm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 coredns-7db6d8ff4d-78rrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m22s
	  kube-system                 etcd-multinode-059001                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m35s
	  kube-system                 kindnet-nrhgt                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m22s
	  kube-system                 kube-apiserver-multinode-059001             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-controller-manager-multinode-059001    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-proxy-blctg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-scheduler-multinode-059001             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m20s              kube-proxy       
	  Normal  Starting                 79s                kube-proxy       
	  Normal  NodeHasSufficientMemory  7m42s              kubelet          Node multinode-059001 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  7m35s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m35s              kubelet          Node multinode-059001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s              kubelet          Node multinode-059001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s              kubelet          Node multinode-059001 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m35s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m23s              node-controller  Node multinode-059001 event: Registered Node multinode-059001 in Controller
	  Normal  NodeReady                7m19s              kubelet          Node multinode-059001 status is now: NodeReady
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s (x8 over 85s)  kubelet          Node multinode-059001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x8 over 85s)  kubelet          Node multinode-059001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 85s)  kubelet          Node multinode-059001 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                node-controller  Node multinode-059001 event: Registered Node multinode-059001 in Controller
	
	
	Name:               multinode-059001-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-059001-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-059001
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_50_20_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:50:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-059001-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:50:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:50:50 +0000   Sat, 20 Apr 2024 00:50:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:50:50 +0000   Sat, 20 Apr 2024 00:50:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:50:50 +0000   Sat, 20 Apr 2024 00:50:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:50:50 +0000   Sat, 20 Apr 2024 00:50:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    multinode-059001-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7de15afd65a4b16958d3dab80a21b11
	  System UUID:                d7de15af-d65a-4b16-958d-3dab80a21b11
	  Boot ID:                    3953b031-9153-439e-9151-da703721965e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-srrgg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kindnet-zfrjl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-z5zrr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m35s                  kube-proxy  
	  Normal  Starting                 34s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m40s (x2 over 6m40s)  kubelet     Node multinode-059001-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x2 over 6m40s)  kubelet     Node multinode-059001-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x2 over 6m40s)  kubelet     Node multinode-059001-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m32s                  kubelet     Node multinode-059001-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  39s (x2 over 39s)      kubelet     Node multinode-059001-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x2 over 39s)      kubelet     Node multinode-059001-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x2 over 39s)      kubelet     Node multinode-059001-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                    kubelet     Node multinode-059001-m02 status is now: NodeReady
	
	
	Name:               multinode-059001-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-059001-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-059001
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_50_48_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:50:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-059001-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:50:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:50:55 +0000   Sat, 20 Apr 2024 00:50:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:50:55 +0000   Sat, 20 Apr 2024 00:50:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:50:55 +0000   Sat, 20 Apr 2024 00:50:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:50:55 +0000   Sat, 20 Apr 2024 00:50:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    multinode-059001-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 962bce28a8744db98f0956aa8779b5ba
	  System UUID:                962bce28-a874-4db9-8f09-56aa8779b5ba
	  Boot ID:                    6217513a-087a-422f-ae6f-d9add3d412ca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-mwlh8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m54s
	  kube-system                 kube-proxy-vx26z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m49s                  kube-proxy  
	  Normal  Starting                 6s                     kube-proxy  
	  Normal  Starting                 5m10s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m54s (x2 over 5m55s)  kubelet     Node multinode-059001-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x2 over 5m55s)  kubelet     Node multinode-059001-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s (x2 over 5m55s)  kubelet     Node multinode-059001-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m46s                  kubelet     Node multinode-059001-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m13s (x2 over 5m13s)  kubelet     Node multinode-059001-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m13s (x2 over 5m13s)  kubelet     Node multinode-059001-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m13s (x2 over 5m13s)  kubelet     Node multinode-059001-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m7s                   kubelet     Node multinode-059001-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  11s (x2 over 11s)      kubelet     Node multinode-059001-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x2 over 11s)      kubelet     Node multinode-059001-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x2 over 11s)      kubelet     Node multinode-059001-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-059001-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.056282] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071956] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.205630] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.138173] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.306730] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.846277] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.059389] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.319814] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +1.066198] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.491580] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.091304] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.097624] systemd-fstab-generator[1481]: Ignoring "noauto" option for root device
	[  +0.085979] kauditd_printk_skb: 21 callbacks suppressed
	[Apr20 00:44] kauditd_printk_skb: 84 callbacks suppressed
	[Apr20 00:49] systemd-fstab-generator[2766]: Ignoring "noauto" option for root device
	[  +0.150393] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.214637] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.152379] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.288296] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +0.773144] systemd-fstab-generator[2928]: Ignoring "noauto" option for root device
	[  +1.953984] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +5.708892] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.735922] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.395805] systemd-fstab-generator[3875]: Ignoring "noauto" option for root device
	[Apr20 00:50] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766] <==
	{"level":"info","ts":"2024-04-20T00:49:34.473438Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T00:49:34.473452Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T00:49:34.473786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 switched to configuration voters=(1146381907749364645)"}
	{"level":"info","ts":"2024-04-20T00:49:34.473867Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","added-peer-id":"fe8c4457455e3a5","added-peer-peer-urls":["https://192.168.39.200:2380"]}
	{"level":"info","ts":"2024-04-20T00:49:34.474016Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:49:34.474069Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:49:34.512175Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T00:49:34.517986Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-04-20T00:49:34.518165Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-04-20T00:49:34.519835Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T00:49:34.519765Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe8c4457455e3a5","initial-advertise-peer-urls":["https://192.168.39.200:2380"],"listen-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T00:49:35.713607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-20T00:49:35.713708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-20T00:49:35.713783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgPreVoteResp from fe8c4457455e3a5 at term 2"}
	{"level":"info","ts":"2024-04-20T00:49:35.713819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became candidate at term 3"}
	{"level":"info","ts":"2024-04-20T00:49:35.713844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgVoteResp from fe8c4457455e3a5 at term 3"}
	{"level":"info","ts":"2024-04-20T00:49:35.713872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became leader at term 3"}
	{"level":"info","ts":"2024-04-20T00:49:35.713897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe8c4457455e3a5 elected leader fe8c4457455e3a5 at term 3"}
	{"level":"info","ts":"2024-04-20T00:49:35.723194Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T00:49:35.725579Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T00:49:35.72564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T00:49:35.72748Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	{"level":"info","ts":"2024-04-20T00:49:35.727949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T00:49:35.730066Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T00:49:35.72302Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe8c4457455e3a5","local-member-attributes":"{Name:multinode-059001 ClientURLs:[https://192.168.39.200:2379]}","request-path":"/0/members/fe8c4457455e3a5/attributes","cluster-id":"1d37198946ef4128","publish-timeout":"7s"}
	
	
	==> etcd [e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175] <==
	{"level":"info","ts":"2024-04-20T00:43:19.037072Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:43:19.037093Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:43:19.037696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T00:43:19.038672Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	{"level":"info","ts":"2024-04-20T00:44:18.303737Z","caller":"traceutil/trace.go:171","msg":"trace[2075837120] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"156.713872ms","start":"2024-04-20T00:44:18.146991Z","end":"2024-04-20T00:44:18.303705Z","steps":["trace[2075837120] 'process raft request'  (duration: 114.461721ms)","trace[2075837120] 'compare'  (duration: 41.874534ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T00:44:18.303937Z","caller":"traceutil/trace.go:171","msg":"trace[494060433] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"121.115523ms","start":"2024-04-20T00:44:18.182814Z","end":"2024-04-20T00:44:18.30393Z","steps":["trace[494060433] 'process raft request'  (duration: 120.611625ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:45:04.089959Z","caller":"traceutil/trace.go:171","msg":"trace[1502393717] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"106.066007ms","start":"2024-04-20T00:45:03.983863Z","end":"2024-04-20T00:45:04.089929Z","steps":["trace[1502393717] 'process raft request'  (duration: 105.922806ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:45:04.227931Z","caller":"traceutil/trace.go:171","msg":"trace[2023883580] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:614; }","duration":"126.819005ms","start":"2024-04-20T00:45:04.101098Z","end":"2024-04-20T00:45:04.227917Z","steps":["trace[2023883580] 'read index received'  (duration: 25.053873ms)","trace[2023883580] 'applied index is now lower than readState.Index'  (duration: 101.764653ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T00:45:04.228054Z","caller":"traceutil/trace.go:171","msg":"trace[582215466] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"130.194148ms","start":"2024-04-20T00:45:04.09785Z","end":"2024-04-20T00:45:04.228045Z","steps":["trace[582215466] 'process raft request'  (duration: 125.225363ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:45:04.228328Z","caller":"traceutil/trace.go:171","msg":"trace[759209653] transaction","detail":"{read_only:false; number_of_response:1; response_revision:585; }","duration":"127.165677ms","start":"2024-04-20T00:45:04.101151Z","end":"2024-04-20T00:45:04.228317Z","steps":["trace[759209653] 'process raft request'  (duration: 126.722624ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:45:04.228353Z","caller":"traceutil/trace.go:171","msg":"trace[1411647333] transaction","detail":"{read_only:false; number_of_response:1; response_revision:585; }","duration":"126.022079ms","start":"2024-04-20T00:45:04.102327Z","end":"2024-04-20T00:45:04.22835Z","steps":["trace[1411647333] 'process raft request'  (duration: 125.571033ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:45:04.228497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.290855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-20T00:45:04.23043Z","caller":"traceutil/trace.go:171","msg":"trace[989438498] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:585; }","duration":"129.34178ms","start":"2024-04-20T00:45:04.101071Z","end":"2024-04-20T00:45:04.230412Z","steps":["trace[989438498] 'agreement among raft nodes before linearized reading'  (duration: 127.293907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:45:04.228729Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.918771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-059001-m03\" ","response":"range_response_count:1 size:2130"}
	{"level":"info","ts":"2024-04-20T00:45:04.2309Z","caller":"traceutil/trace.go:171","msg":"trace[319444847] range","detail":"{range_begin:/registry/minions/multinode-059001-m03; range_end:; response_count:1; response_revision:585; }","duration":"127.114566ms","start":"2024-04-20T00:45:04.103778Z","end":"2024-04-20T00:45:04.230893Z","steps":["trace[319444847] 'agreement among raft nodes before linearized reading'  (duration: 124.923285ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:57.927814Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-20T00:47:57.928064Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-059001","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"]}
	{"level":"warn","ts":"2024-04-20T00:47:57.928289Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-20T00:47:57.928402Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-20T00:47:58.021163Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.200:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-20T00:47:58.021222Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.200:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-20T00:47:58.022787Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fe8c4457455e3a5","current-leader-member-id":"fe8c4457455e3a5"}
	{"level":"info","ts":"2024-04-20T00:47:58.025275Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-04-20T00:47:58.025457Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-04-20T00:47:58.025493Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-059001","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"]}
	
	
	==> kernel <==
	 00:50:59 up 8 min,  0 users,  load average: 0.43, 0.42, 0.24
	Linux multinode-059001 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95] <==
	I0420 00:47:09.267090       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:47:19.280953       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:47:19.281097       1 main.go:227] handling current node
	I0420 00:47:19.281120       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:47:19.281138       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:47:19.281270       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:47:19.281294       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:47:29.286576       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:47:29.286687       1 main.go:227] handling current node
	I0420 00:47:29.286716       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:47:29.286735       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:47:29.286896       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:47:29.286921       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:47:39.293354       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:47:39.293593       1 main.go:227] handling current node
	I0420 00:47:39.293681       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:47:39.293708       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:47:39.293853       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:47:39.293873       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:47:49.300018       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:47:49.300070       1 main.go:227] handling current node
	I0420 00:47:49.300080       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:47:49.300096       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:47:49.300686       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:47:49.300772       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35] <==
	I0420 00:50:09.985878       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:50:19.996915       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:50:19.997223       1 main.go:227] handling current node
	I0420 00:50:19.997296       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:50:19.997350       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:50:19.997663       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:50:19.997735       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:50:30.011650       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:50:30.011718       1 main.go:227] handling current node
	I0420 00:50:30.011744       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:50:30.011750       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:50:30.011859       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:50:30.011893       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:50:40.038485       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:50:40.038673       1 main.go:227] handling current node
	I0420 00:50:40.038706       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:50:40.038725       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:50:40.038835       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:50:40.038855       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:50:50.052276       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:50:50.052392       1 main.go:227] handling current node
	I0420 00:50:50.052495       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:50:50.052605       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:50:50.052884       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:50:50.053020       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb] <==
	W0420 00:47:57.955107       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955169       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955219       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955271       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955321       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955368       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955411       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955468       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955517       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955697       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955758       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955805       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955859       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955922       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955974       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956018       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956145       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956197       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956242       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956293       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956336       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956380       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956427       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956471       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.957744       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2] <==
	I0420 00:49:37.239501       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 00:49:37.247700       1 aggregator.go:165] initial CRD sync complete...
	I0420 00:49:37.247738       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 00:49:37.247745       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 00:49:37.247750       1 cache.go:39] Caches are synced for autoregister controller
	I0420 00:49:37.256964       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 00:49:37.265784       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 00:49:37.265821       1 policy_source.go:224] refreshing policies
	I0420 00:49:37.334036       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 00:49:37.334177       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 00:49:37.334214       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 00:49:37.334412       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 00:49:37.342101       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 00:49:37.352712       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 00:49:37.352789       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 00:49:37.363627       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0420 00:49:37.378284       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0420 00:49:38.159063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0420 00:49:39.676206       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 00:49:39.845794       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0420 00:49:39.873881       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 00:49:39.960050       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0420 00:49:39.971147       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0420 00:49:50.278264       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0420 00:49:50.328180       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b] <==
	I0420 00:49:50.679927       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 00:49:50.680009       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0420 00:50:15.112876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.613373ms"
	I0420 00:50:15.126198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.849415ms"
	I0420 00:50:15.126471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.38µs"
	I0420 00:50:15.137710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.7µs"
	I0420 00:50:19.538489       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-059001-m02\" does not exist"
	I0420 00:50:19.552015       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m02" podCIDRs=["10.244.1.0/24"]
	I0420 00:50:20.984495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.999µs"
	I0420 00:50:21.455105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.323µs"
	I0420 00:50:21.484101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.536µs"
	I0420 00:50:21.492916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.552µs"
	I0420 00:50:21.502048       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.905µs"
	I0420 00:50:21.511453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.48µs"
	I0420 00:50:21.516994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.517µs"
	I0420 00:50:26.990003       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:50:27.008756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.428µs"
	I0420 00:50:27.024679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.737µs"
	I0420 00:50:28.922914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.386877ms"
	I0420 00:50:28.923023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.556µs"
	I0420 00:50:46.607357       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:50:47.729476       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:50:47.730233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-059001-m03\" does not exist"
	I0420 00:50:47.749261       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m03" podCIDRs=["10.244.2.0/24"]
	I0420 00:50:55.282187       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	
	
	==> kube-controller-manager [81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c] <==
	I0420 00:44:18.312629       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-059001-m02\" does not exist"
	I0420 00:44:18.367629       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m02" podCIDRs=["10.244.1.0/24"]
	I0420 00:44:20.587140       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-059001-m02"
	I0420 00:44:26.402499       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:44:28.692891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.409928ms"
	I0420 00:44:28.733325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.255008ms"
	I0420 00:44:28.752956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.245075ms"
	I0420 00:44:28.753058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.3µs"
	I0420 00:44:30.641246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.023818ms"
	I0420 00:44:30.642319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.451µs"
	I0420 00:44:31.200469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.951875ms"
	I0420 00:44:31.201327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.449µs"
	I0420 00:45:04.097960       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-059001-m03\" does not exist"
	I0420 00:45:04.098062       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:45:04.256007       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m03" podCIDRs=["10.244.2.0/24"]
	I0420 00:45:05.606169       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-059001-m03"
	I0420 00:45:12.862204       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:45:44.174863       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:45:45.266896       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:45:45.267238       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-059001-m03\" does not exist"
	I0420 00:45:45.290204       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m03" podCIDRs=["10.244.3.0/24"]
	I0420 00:45:51.455304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:46:30.669911       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m03"
	I0420 00:46:30.718977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.298661ms"
	I0420 00:46:30.719643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="135.718µs"
	
	
	==> kube-proxy [0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa] <==
	I0420 00:43:38.521708       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:43:38.529870       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0420 00:43:38.585053       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:43:38.585089       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:43:38.585104       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:43:38.588245       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:43:38.588489       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:43:38.588635       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:43:38.591196       1 config.go:192] "Starting service config controller"
	I0420 00:43:38.591241       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:43:38.591282       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:43:38.591298       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:43:38.592148       1 config.go:319] "Starting node config controller"
	I0420 00:43:38.592196       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:43:38.691759       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:43:38.691825       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:43:38.692309       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af] <==
	I0420 00:49:39.221428       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:49:39.250353       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0420 00:49:39.425721       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:49:39.425749       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:49:39.425764       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:49:39.437086       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:49:39.437253       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:49:39.437268       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:49:39.441018       1 config.go:192] "Starting service config controller"
	I0420 00:49:39.441035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:49:39.441151       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:49:39.441157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:49:39.452491       1 config.go:319] "Starting node config controller"
	I0420 00:49:39.453252       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:49:39.543860       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:49:39.543903       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:49:39.554081       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7] <==
	E0420 00:43:20.404346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 00:43:20.403314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:43:20.404941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:43:20.403603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:43:20.405070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:43:20.404050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 00:43:20.405215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 00:43:20.404101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:43:20.405340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:43:21.231996       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 00:43:21.232105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 00:43:21.410618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 00:43:21.411653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 00:43:21.513679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:43:21.513737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:43:21.521173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:43:21.521225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:43:21.555077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:43:21.555139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:43:21.619679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 00:43:21.619733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 00:43:21.665241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:43:21.665293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0420 00:43:21.992874       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0420 00:47:57.936818       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a] <==
	I0420 00:49:34.962140       1 serving.go:380] Generated self-signed cert in-memory
	W0420 00:49:37.239991       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0420 00:49:37.243632       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:49:37.243782       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0420 00:49:37.243812       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0420 00:49:37.263889       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0420 00:49:37.264760       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:49:37.267032       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0420 00:49:37.267477       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0420 00:49:37.267940       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 00:49:37.267784       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0420 00:49:37.368245       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 00:49:34 multinode-059001 kubelet[3061]: E0420 00:49:34.048635    3061 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	Apr 20 00:49:34 multinode-059001 kubelet[3061]: I0420 00:49:34.602037    3061 kubelet_node_status.go:73] "Attempting to register node" node="multinode-059001"
	Apr 20 00:49:37 multinode-059001 kubelet[3061]: I0420 00:49:37.311620    3061 kubelet_node_status.go:112] "Node was previously registered" node="multinode-059001"
	Apr 20 00:49:37 multinode-059001 kubelet[3061]: I0420 00:49:37.312358    3061 kubelet_node_status.go:76] "Successfully registered node" node="multinode-059001"
	Apr 20 00:49:37 multinode-059001 kubelet[3061]: I0420 00:49:37.315157    3061 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 20 00:49:37 multinode-059001 kubelet[3061]: I0420 00:49:37.317001    3061 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.070958    3061 apiserver.go:52] "Watching apiserver"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.074182    3061 topology_manager.go:215] "Topology Admit Handler" podUID="dc879522-987c-4e38-bdb1-949a9d934334" podNamespace="kube-system" podName="kindnet-nrhgt"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.074423    3061 topology_manager.go:215] "Topology Admit Handler" podUID="8b0e9030-0d37-40b8-bb06-621b526ca289" podNamespace="kube-system" podName="coredns-7db6d8ff4d-78rrw"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.074633    3061 topology_manager.go:215] "Topology Admit Handler" podUID="64ab7435-a9ee-432d-8b87-58c3e4c7a147" podNamespace="kube-system" podName="kube-proxy-blctg"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.074757    3061 topology_manager.go:215] "Topology Admit Handler" podUID="139b40e9-a1ec-4035-88a9-e382b2ee6293" podNamespace="kube-system" podName="storage-provisioner"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.074833    3061 topology_manager.go:215] "Topology Admit Handler" podUID="cecb2998-715e-4d88-bea0-1cbece396619" podNamespace="default" podName="busybox-fc5497c4f-xlthm"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.085988    3061 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.129985    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc879522-987c-4e38-bdb1-949a9d934334-lib-modules\") pod \"kindnet-nrhgt\" (UID: \"dc879522-987c-4e38-bdb1-949a9d934334\") " pod="kube-system/kindnet-nrhgt"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130061    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dc879522-987c-4e38-bdb1-949a9d934334-cni-cfg\") pod \"kindnet-nrhgt\" (UID: \"dc879522-987c-4e38-bdb1-949a9d934334\") " pod="kube-system/kindnet-nrhgt"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130081    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc879522-987c-4e38-bdb1-949a9d934334-xtables-lock\") pod \"kindnet-nrhgt\" (UID: \"dc879522-987c-4e38-bdb1-949a9d934334\") " pod="kube-system/kindnet-nrhgt"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130135    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64ab7435-a9ee-432d-8b87-58c3e4c7a147-xtables-lock\") pod \"kube-proxy-blctg\" (UID: \"64ab7435-a9ee-432d-8b87-58c3e4c7a147\") " pod="kube-system/kube-proxy-blctg"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130148    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64ab7435-a9ee-432d-8b87-58c3e4c7a147-lib-modules\") pod \"kube-proxy-blctg\" (UID: \"64ab7435-a9ee-432d-8b87-58c3e4c7a147\") " pod="kube-system/kube-proxy-blctg"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130175    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/139b40e9-a1ec-4035-88a9-e382b2ee6293-tmp\") pod \"storage-provisioner\" (UID: \"139b40e9-a1ec-4035-88a9-e382b2ee6293\") " pod="kube-system/storage-provisioner"
	Apr 20 00:49:45 multinode-059001 kubelet[3061]: I0420 00:49:45.208322    3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 20 00:50:33 multinode-059001 kubelet[3061]: E0420 00:50:33.210126    3061 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:50:33 multinode-059001 kubelet[3061]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:50:33 multinode-059001 kubelet[3061]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:50:33 multinode-059001 kubelet[3061]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:50:33 multinode-059001 kubelet[3061]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 00:50:58.011280  113579 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18703-76456/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-059001 -n multinode-059001
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-059001 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (305.81s)

TestMultiNode/serial/StopMultiNode (141.65s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 stop
E0420 00:51:14.705486   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-059001 stop: exit status 82 (2m0.483513707s)

-- stdout --
	* Stopping node "multinode-059001-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-059001 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 status
E0420 00:53:11.657775   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-059001 status: exit status 3 (18.845785029s)

-- stdout --
	multinode-059001
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-059001-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0420 00:53:21.737664  114258 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.91:22: connect: no route to host
	E0420 00:53:21.737724  114258 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.91:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-059001 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-059001 -n multinode-059001
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-059001 logs -n 25: (1.631348589s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m02:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001:/home/docker/cp-test_multinode-059001-m02_multinode-059001.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n multinode-059001 sudo cat                                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /home/docker/cp-test_multinode-059001-m02_multinode-059001.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m02:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03:/home/docker/cp-test_multinode-059001-m02_multinode-059001-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n multinode-059001-m03 sudo cat                                   | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /home/docker/cp-test_multinode-059001-m02_multinode-059001-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp testdata/cp-test.txt                                                | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m03:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2465559633/001/cp-test_multinode-059001-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m03:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001:/home/docker/cp-test_multinode-059001-m03_multinode-059001.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n multinode-059001 sudo cat                                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /home/docker/cp-test_multinode-059001-m03_multinode-059001.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-059001 cp multinode-059001-m03:/home/docker/cp-test.txt                       | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m02:/home/docker/cp-test_multinode-059001-m03_multinode-059001-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n                                                                 | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | multinode-059001-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-059001 ssh -n multinode-059001-m02 sudo cat                                   | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | /home/docker/cp-test_multinode-059001-m03_multinode-059001-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-059001 node stop m03                                                          | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	| node    | multinode-059001 node start                                                             | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC | 20 Apr 24 00:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-059001                                                                | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC |                     |
	| stop    | -p multinode-059001                                                                     | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC |                     |
	| start   | -p multinode-059001                                                                     | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:47 UTC | 20 Apr 24 00:50 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-059001                                                                | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC |                     |
	| node    | multinode-059001 node delete                                                            | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:51 UTC | 20 Apr 24 00:51 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-059001 stop                                                                   | multinode-059001 | jenkins | v1.33.0 | 20 Apr 24 00:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:47:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:47:57.013195  112536 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:47:57.013289  112536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:47:57.013299  112536 out.go:304] Setting ErrFile to fd 2...
	I0420 00:47:57.013303  112536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:47:57.013537  112536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:47:57.014076  112536 out.go:298] Setting JSON to false
	I0420 00:47:57.014958  112536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12624,"bootTime":1713561453,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 00:47:57.015010  112536 start.go:139] virtualization: kvm guest
	I0420 00:47:57.017488  112536 out.go:177] * [multinode-059001] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 00:47:57.019019  112536 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:47:57.020532  112536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:47:57.019028  112536 notify.go:220] Checking for updates...
	I0420 00:47:57.022226  112536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:47:57.023628  112536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:47:57.024855  112536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 00:47:57.025999  112536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:47:57.027641  112536 config.go:182] Loaded profile config "multinode-059001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:57.027764  112536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:47:57.028378  112536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:47:57.028424  112536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:47:57.043243  112536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0420 00:47:57.043784  112536 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:47:57.044427  112536 main.go:141] libmachine: Using API Version  1
	I0420 00:47:57.044449  112536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:47:57.044820  112536 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:47:57.044983  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:47:57.079002  112536 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 00:47:57.080323  112536 start.go:297] selected driver: kvm2
	I0420 00:47:57.080337  112536 start.go:901] validating driver "kvm2" against &{Name:multinode-059001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:47:57.080522  112536 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:47:57.080963  112536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:47:57.081049  112536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 00:47:57.095226  112536 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 00:47:57.095844  112536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:47:57.095913  112536 cni.go:84] Creating CNI manager for ""
	I0420 00:47:57.095930  112536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0420 00:47:57.096002  112536 start.go:340] cluster config:
	{Name:multinode-059001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:47:57.096162  112536 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:47:57.097860  112536 out.go:177] * Starting "multinode-059001" primary control-plane node in "multinode-059001" cluster
	I0420 00:47:57.099202  112536 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:47:57.099245  112536 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 00:47:57.099259  112536 cache.go:56] Caching tarball of preloaded images
	I0420 00:47:57.099357  112536 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 00:47:57.099373  112536 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:47:57.099483  112536 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/config.json ...
	I0420 00:47:57.099685  112536 start.go:360] acquireMachinesLock for multinode-059001: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 00:47:57.099729  112536 start.go:364] duration metric: took 24.489µs to acquireMachinesLock for "multinode-059001"
	I0420 00:47:57.099743  112536 start.go:96] Skipping create...Using existing machine configuration
	I0420 00:47:57.099750  112536 fix.go:54] fixHost starting: 
	I0420 00:47:57.100049  112536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:47:57.100085  112536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:47:57.113268  112536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44351
	I0420 00:47:57.113791  112536 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:47:57.114342  112536 main.go:141] libmachine: Using API Version  1
	I0420 00:47:57.114366  112536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:47:57.114628  112536 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:47:57.114806  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:47:57.115010  112536 main.go:141] libmachine: (multinode-059001) Calling .GetState
	I0420 00:47:57.116540  112536 fix.go:112] recreateIfNeeded on multinode-059001: state=Running err=<nil>
	W0420 00:47:57.116558  112536 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 00:47:57.118350  112536 out.go:177] * Updating the running kvm2 "multinode-059001" VM ...
	I0420 00:47:57.119691  112536 machine.go:94] provisionDockerMachine start ...
	I0420 00:47:57.119713  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:47:57.119901  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.122271  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.122660  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.122696  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.122823  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.122985  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.123151  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.123290  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.123450  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:47:57.123694  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:47:57.123708  112536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 00:47:57.230886  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-059001
	
	I0420 00:47:57.230927  112536 main.go:141] libmachine: (multinode-059001) Calling .GetMachineName
	I0420 00:47:57.231179  112536 buildroot.go:166] provisioning hostname "multinode-059001"
	I0420 00:47:57.231214  112536 main.go:141] libmachine: (multinode-059001) Calling .GetMachineName
	I0420 00:47:57.231417  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.234084  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.234411  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.234443  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.234575  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.234753  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.234901  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.235080  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.235266  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:47:57.235445  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:47:57.235463  112536 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-059001 && echo "multinode-059001" | sudo tee /etc/hostname
	I0420 00:47:57.359531  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-059001
	
	I0420 00:47:57.359556  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.362264  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.362637  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.362749  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.362912  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.363125  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.363295  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.363442  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.363602  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:47:57.363813  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:47:57.363837  112536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-059001' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-059001/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-059001' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:47:57.470936  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:47:57.470964  112536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 00:47:57.470992  112536 buildroot.go:174] setting up certificates
	I0420 00:47:57.471002  112536 provision.go:84] configureAuth start
	I0420 00:47:57.471011  112536 main.go:141] libmachine: (multinode-059001) Calling .GetMachineName
	I0420 00:47:57.471329  112536 main.go:141] libmachine: (multinode-059001) Calling .GetIP
	I0420 00:47:57.473952  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.474307  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.474328  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.474462  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.476685  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.477185  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.477237  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.477277  112536 provision.go:143] copyHostCerts
	I0420 00:47:57.477350  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:47:57.477391  112536 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 00:47:57.477402  112536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 00:47:57.477467  112536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 00:47:57.477592  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:47:57.477614  112536 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 00:47:57.477619  112536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 00:47:57.477649  112536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 00:47:57.477716  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:47:57.477731  112536 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 00:47:57.477735  112536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 00:47:57.477756  112536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 00:47:57.477865  112536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.multinode-059001 san=[127.0.0.1 192.168.39.200 localhost minikube multinode-059001]
	I0420 00:47:57.612193  112536 provision.go:177] copyRemoteCerts
	I0420 00:47:57.612258  112536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:47:57.612284  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.615039  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.615523  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.615566  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.615740  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.615924  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.616126  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.616251  112536 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001/id_rsa Username:docker}
	I0420 00:47:57.704307  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 00:47:57.704386  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 00:47:57.735015  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 00:47:57.735094  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0420 00:47:57.762111  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 00:47:57.762173  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 00:47:57.789401  112536 provision.go:87] duration metric: took 318.385376ms to configureAuth
	I0420 00:47:57.789427  112536 buildroot.go:189] setting minikube options for container-runtime
	I0420 00:47:57.789687  112536 config.go:182] Loaded profile config "multinode-059001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:57.789777  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:47:57.792332  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.792776  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:47:57.792806  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:47:57.792980  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:47:57.793183  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.793386  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:47:57.793560  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:47:57.793736  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:47:57.793950  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:47:57.793966  112536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:49:28.684701  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:49:28.684747  112536 machine.go:97] duration metric: took 1m31.565040203s to provisionDockerMachine
	I0420 00:49:28.684764  112536 start.go:293] postStartSetup for "multinode-059001" (driver="kvm2")
	I0420 00:49:28.684831  112536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:49:28.684863  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.685294  112536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:49:28.685343  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:49:28.688944  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.689409  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.689442  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.689573  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:49:28.689797  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.689972  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:49:28.690119  112536 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001/id_rsa Username:docker}
	I0420 00:49:28.774923  112536 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:49:28.779738  112536 command_runner.go:130] > NAME=Buildroot
	I0420 00:49:28.779761  112536 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0420 00:49:28.779774  112536 command_runner.go:130] > ID=buildroot
	I0420 00:49:28.779780  112536 command_runner.go:130] > VERSION_ID=2023.02.9
	I0420 00:49:28.779785  112536 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0420 00:49:28.779819  112536 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 00:49:28.779837  112536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 00:49:28.779894  112536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 00:49:28.779996  112536 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 00:49:28.780011  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /etc/ssl/certs/837422.pem
	I0420 00:49:28.780135  112536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 00:49:28.790922  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:49:28.819152  112536 start.go:296] duration metric: took 134.372664ms for postStartSetup
	I0420 00:49:28.819290  112536 fix.go:56] duration metric: took 1m31.719519569s for fixHost
	I0420 00:49:28.819321  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:49:28.822162  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.822550  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.822589  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.822712  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:49:28.822892  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.823037  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.823273  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:49:28.823482  112536 main.go:141] libmachine: Using SSH client type: native
	I0420 00:49:28.823688  112536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0420 00:49:28.823702  112536 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 00:49:28.926255  112536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713574168.903910655
	
	I0420 00:49:28.926277  112536 fix.go:216] guest clock: 1713574168.903910655
	I0420 00:49:28.926287  112536 fix.go:229] Guest: 2024-04-20 00:49:28.903910655 +0000 UTC Remote: 2024-04-20 00:49:28.819301079 +0000 UTC m=+91.854799305 (delta=84.609576ms)
	I0420 00:49:28.926310  112536 fix.go:200] guest clock delta is within tolerance: 84.609576ms
	I0420 00:49:28.926317  112536 start.go:83] releasing machines lock for "multinode-059001", held for 1m31.826578397s
	I0420 00:49:28.926344  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.926640  112536 main.go:141] libmachine: (multinode-059001) Calling .GetIP
	I0420 00:49:28.929290  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.929701  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.929728  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.929882  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.930359  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.930554  112536 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:49:28.930665  112536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:49:28.930697  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:49:28.930767  112536 ssh_runner.go:195] Run: cat /version.json
	I0420 00:49:28.930789  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:49:28.933539  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.933771  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.933901  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.933930  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.934051  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:49:28.934158  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:28.934178  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:28.934222  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.934358  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:49:28.934378  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:49:28.934584  112536 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001/id_rsa Username:docker}
	I0420 00:49:28.934601  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:49:28.934729  112536 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:49:28.934842  112536 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001/id_rsa Username:docker}
	I0420 00:49:29.010653  112536 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0420 00:49:29.010792  112536 ssh_runner.go:195] Run: systemctl --version
	I0420 00:49:29.035190  112536 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0420 00:49:29.035286  112536 command_runner.go:130] > systemd 252 (252)
	I0420 00:49:29.035316  112536 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0420 00:49:29.035386  112536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:49:29.208553  112536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0420 00:49:29.215884  112536 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0420 00:49:29.216372  112536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 00:49:29.216441  112536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:49:29.226739  112536 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0420 00:49:29.226771  112536 start.go:494] detecting cgroup driver to use...
	I0420 00:49:29.226841  112536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:49:29.244417  112536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:49:29.259510  112536 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:49:29.259556  112536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:49:29.274056  112536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:49:29.288252  112536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:49:29.434378  112536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:49:29.604516  112536 docker.go:233] disabling docker service ...
	I0420 00:49:29.604625  112536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:49:29.627883  112536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:49:29.642449  112536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:49:29.802096  112536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:49:29.950458  112536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:49:29.965707  112536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:49:29.987546  112536 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
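With /etc/crictl.yaml now pointing at the CRI-O socket, crictl can be used without passing --runtime-endpoint each time. A small sanity check, assuming crictl is on PATH and crio is up (both are confirmed later in this log), would be:
	sudo crictl version             # should report RuntimeName: cri-o and RuntimeVersion: 1.29.1, as seen below
	sudo crictl info | head -n 20   # runtime and network status summary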
	I0420 00:49:29.988207  112536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:49:29.988260  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.000005  112536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:49:30.000085  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.011713  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.023295  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.034464  112536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:49:30.046038  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.057100  112536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:49:30.069776  112536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
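Taken together, the sed edits above steer /etc/crio/crio.conf.d/02-crio.conf toward a few concrete settings: the pause image, the cgroupfs cgroup manager, the conmon cgroup, and an unprivileged-port sysctl. A minimal sketch of that end state follows; the section placement mirrors the crio config dump later in this log, but the real drop-in on the guest may carry additional keys, so treat this as illustrative rather than the file minikube actually ships.
	# Sketch of the drop-in's end state after the edits above (illustrative; the real file may differ).
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
The daemon-reload and systemctl restart crio a few lines further down are what make these settings take effect.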
	I0420 00:49:30.080908  112536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:49:30.091008  112536 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0420 00:49:30.091084  112536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:49:30.100934  112536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:49:30.243280  112536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 00:49:30.491269  112536 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:49:30.491420  112536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:49:30.497797  112536 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0420 00:49:30.497825  112536 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0420 00:49:30.497834  112536 command_runner.go:130] > Device: 0,22	Inode: 1305        Links: 1
	I0420 00:49:30.497844  112536 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0420 00:49:30.497852  112536 command_runner.go:130] > Access: 2024-04-20 00:49:30.366609509 +0000
	I0420 00:49:30.497861  112536 command_runner.go:130] > Modify: 2024-04-20 00:49:30.366609509 +0000
	I0420 00:49:30.497870  112536 command_runner.go:130] > Change: 2024-04-20 00:49:30.366609509 +0000
	I0420 00:49:30.497876  112536 command_runner.go:130] >  Birth: -
	I0420 00:49:30.498074  112536 start.go:562] Will wait 60s for crictl version
	I0420 00:49:30.498139  112536 ssh_runner.go:195] Run: which crictl
	I0420 00:49:30.502699  112536 command_runner.go:130] > /usr/bin/crictl
	I0420 00:49:30.502769  112536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:49:30.549150  112536 command_runner.go:130] > Version:  0.1.0
	I0420 00:49:30.549187  112536 command_runner.go:130] > RuntimeName:  cri-o
	I0420 00:49:30.549194  112536 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0420 00:49:30.549201  112536 command_runner.go:130] > RuntimeApiVersion:  v1
	I0420 00:49:30.550443  112536 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 00:49:30.550539  112536 ssh_runner.go:195] Run: crio --version
	I0420 00:49:30.585405  112536 command_runner.go:130] > crio version 1.29.1
	I0420 00:49:30.585427  112536 command_runner.go:130] > Version:        1.29.1
	I0420 00:49:30.585433  112536 command_runner.go:130] > GitCommit:      unknown
	I0420 00:49:30.585437  112536 command_runner.go:130] > GitCommitDate:  unknown
	I0420 00:49:30.585442  112536 command_runner.go:130] > GitTreeState:   clean
	I0420 00:49:30.585447  112536 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0420 00:49:30.585451  112536 command_runner.go:130] > GoVersion:      go1.21.6
	I0420 00:49:30.585455  112536 command_runner.go:130] > Compiler:       gc
	I0420 00:49:30.585459  112536 command_runner.go:130] > Platform:       linux/amd64
	I0420 00:49:30.585463  112536 command_runner.go:130] > Linkmode:       dynamic
	I0420 00:49:30.585468  112536 command_runner.go:130] > BuildTags:      
	I0420 00:49:30.585489  112536 command_runner.go:130] >   containers_image_ostree_stub
	I0420 00:49:30.585493  112536 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0420 00:49:30.585497  112536 command_runner.go:130] >   btrfs_noversion
	I0420 00:49:30.585502  112536 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0420 00:49:30.585506  112536 command_runner.go:130] >   libdm_no_deferred_remove
	I0420 00:49:30.585509  112536 command_runner.go:130] >   seccomp
	I0420 00:49:30.585513  112536 command_runner.go:130] > LDFlags:          unknown
	I0420 00:49:30.585517  112536 command_runner.go:130] > SeccompEnabled:   true
	I0420 00:49:30.585521  112536 command_runner.go:130] > AppArmorEnabled:  false
	I0420 00:49:30.586904  112536 ssh_runner.go:195] Run: crio --version
	I0420 00:49:30.619098  112536 command_runner.go:130] > crio version 1.29.1
	I0420 00:49:30.619123  112536 command_runner.go:130] > Version:        1.29.1
	I0420 00:49:30.619131  112536 command_runner.go:130] > GitCommit:      unknown
	I0420 00:49:30.619136  112536 command_runner.go:130] > GitCommitDate:  unknown
	I0420 00:49:30.619140  112536 command_runner.go:130] > GitTreeState:   clean
	I0420 00:49:30.619146  112536 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0420 00:49:30.619150  112536 command_runner.go:130] > GoVersion:      go1.21.6
	I0420 00:49:30.619154  112536 command_runner.go:130] > Compiler:       gc
	I0420 00:49:30.619158  112536 command_runner.go:130] > Platform:       linux/amd64
	I0420 00:49:30.619162  112536 command_runner.go:130] > Linkmode:       dynamic
	I0420 00:49:30.619170  112536 command_runner.go:130] > BuildTags:      
	I0420 00:49:30.619174  112536 command_runner.go:130] >   containers_image_ostree_stub
	I0420 00:49:30.619180  112536 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0420 00:49:30.619184  112536 command_runner.go:130] >   btrfs_noversion
	I0420 00:49:30.619188  112536 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0420 00:49:30.619192  112536 command_runner.go:130] >   libdm_no_deferred_remove
	I0420 00:49:30.619196  112536 command_runner.go:130] >   seccomp
	I0420 00:49:30.619200  112536 command_runner.go:130] > LDFlags:          unknown
	I0420 00:49:30.619204  112536 command_runner.go:130] > SeccompEnabled:   true
	I0420 00:49:30.619208  112536 command_runner.go:130] > AppArmorEnabled:  false
	I0420 00:49:30.621079  112536 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 00:49:30.622498  112536 main.go:141] libmachine: (multinode-059001) Calling .GetIP
	I0420 00:49:30.625405  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:30.625784  112536 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:49:30.625824  112536 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:49:30.625996  112536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 00:49:30.630857  112536 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0420 00:49:30.630962  112536 kubeadm.go:877] updating cluster {Name:multinode-059001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 00:49:30.631157  112536 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:49:30.631223  112536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:49:30.675732  112536 command_runner.go:130] > {
	I0420 00:49:30.675762  112536 command_runner.go:130] >   "images": [
	I0420 00:49:30.675775  112536 command_runner.go:130] >     {
	I0420 00:49:30.675788  112536 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0420 00:49:30.675797  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.675808  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0420 00:49:30.675815  112536 command_runner.go:130] >       ],
	I0420 00:49:30.675823  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.675837  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0420 00:49:30.675852  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0420 00:49:30.675857  112536 command_runner.go:130] >       ],
	I0420 00:49:30.675863  112536 command_runner.go:130] >       "size": "65291810",
	I0420 00:49:30.675868  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.675884  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.675898  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.675909  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.675919  112536 command_runner.go:130] >     },
	I0420 00:49:30.675929  112536 command_runner.go:130] >     {
	I0420 00:49:30.675941  112536 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0420 00:49:30.675950  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.675959  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0420 00:49:30.675967  112536 command_runner.go:130] >       ],
	I0420 00:49:30.675975  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676053  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0420 00:49:30.676098  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0420 00:49:30.676119  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676127  112536 command_runner.go:130] >       "size": "1363676",
	I0420 00:49:30.676134  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.676155  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676170  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676178  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676184  112536 command_runner.go:130] >     },
	I0420 00:49:30.676189  112536 command_runner.go:130] >     {
	I0420 00:49:30.676204  112536 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0420 00:49:30.676213  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676224  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0420 00:49:30.676233  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676239  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676265  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0420 00:49:30.676283  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0420 00:49:30.676292  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676299  112536 command_runner.go:130] >       "size": "31470524",
	I0420 00:49:30.676309  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.676315  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676324  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676329  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676336  112536 command_runner.go:130] >     },
	I0420 00:49:30.676342  112536 command_runner.go:130] >     {
	I0420 00:49:30.676352  112536 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0420 00:49:30.676362  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676371  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0420 00:49:30.676380  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676387  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676402  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0420 00:49:30.676433  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0420 00:49:30.676443  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676449  112536 command_runner.go:130] >       "size": "61245718",
	I0420 00:49:30.676456  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.676465  112536 command_runner.go:130] >       "username": "nonroot",
	I0420 00:49:30.676471  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676480  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676485  112536 command_runner.go:130] >     },
	I0420 00:49:30.676536  112536 command_runner.go:130] >     {
	I0420 00:49:30.676553  112536 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0420 00:49:30.676560  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676568  112536 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0420 00:49:30.676578  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676583  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676596  112536 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0420 00:49:30.676611  112536 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0420 00:49:30.676619  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676640  112536 command_runner.go:130] >       "size": "150779692",
	I0420 00:49:30.676651  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.676658  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.676673  112536 command_runner.go:130] >       },
	I0420 00:49:30.676688  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676697  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676703  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676709  112536 command_runner.go:130] >     },
	I0420 00:49:30.676714  112536 command_runner.go:130] >     {
	I0420 00:49:30.676724  112536 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0420 00:49:30.676735  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676743  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0420 00:49:30.676751  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676758  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676771  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0420 00:49:30.676786  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0420 00:49:30.676793  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676797  112536 command_runner.go:130] >       "size": "117609952",
	I0420 00:49:30.676803  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.676812  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.676819  112536 command_runner.go:130] >       },
	I0420 00:49:30.676829  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676836  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.676845  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.676852  112536 command_runner.go:130] >     },
	I0420 00:49:30.676859  112536 command_runner.go:130] >     {
	I0420 00:49:30.676868  112536 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0420 00:49:30.676877  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.676885  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0420 00:49:30.676894  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676901  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.676921  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0420 00:49:30.676936  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0420 00:49:30.676946  112536 command_runner.go:130] >       ],
	I0420 00:49:30.676952  112536 command_runner.go:130] >       "size": "112170310",
	I0420 00:49:30.676958  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.676968  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.676974  112536 command_runner.go:130] >       },
	I0420 00:49:30.676980  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.676997  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.677002  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.677006  112536 command_runner.go:130] >     },
	I0420 00:49:30.677010  112536 command_runner.go:130] >     {
	I0420 00:49:30.677019  112536 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0420 00:49:30.677025  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.677032  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0420 00:49:30.677038  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677045  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.677079  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0420 00:49:30.677114  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0420 00:49:30.677120  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677129  112536 command_runner.go:130] >       "size": "85932953",
	I0420 00:49:30.677139  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.677146  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.677156  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.677162  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.677167  112536 command_runner.go:130] >     },
	I0420 00:49:30.677171  112536 command_runner.go:130] >     {
	I0420 00:49:30.677178  112536 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0420 00:49:30.677184  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.677192  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0420 00:49:30.677198  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677227  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.677246  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0420 00:49:30.677258  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0420 00:49:30.677265  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677272  112536 command_runner.go:130] >       "size": "63026502",
	I0420 00:49:30.677282  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.677288  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.677296  112536 command_runner.go:130] >       },
	I0420 00:49:30.677303  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.677324  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.677331  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.677338  112536 command_runner.go:130] >     },
	I0420 00:49:30.677343  112536 command_runner.go:130] >     {
	I0420 00:49:30.677373  112536 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0420 00:49:30.677383  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.677389  112536 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0420 00:49:30.677399  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677406  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.677420  112536 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0420 00:49:30.677433  112536 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0420 00:49:30.677442  112536 command_runner.go:130] >       ],
	I0420 00:49:30.677449  112536 command_runner.go:130] >       "size": "750414",
	I0420 00:49:30.677456  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.677460  112536 command_runner.go:130] >         "value": "65535"
	I0420 00:49:30.677465  112536 command_runner.go:130] >       },
	I0420 00:49:30.677472  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.677479  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.677488  112536 command_runner.go:130] >       "pinned": true
	I0420 00:49:30.677494  112536 command_runner.go:130] >     }
	I0420 00:49:30.677506  112536 command_runner.go:130] >   ]
	I0420 00:49:30.677511  112536 command_runner.go:130] > }
	I0420 00:49:30.677781  112536 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:49:30.677795  112536 crio.go:433] Images already preloaded, skipping extraction
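The preload decision above is just a comparison of the listed images against what Kubernetes v1.30.0 needs. A hedged one-liner to eyeball the same list by hand (assuming jq is installed wherever you run it; it is not guaranteed to be on the guest) would be:
	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort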
	I0420 00:49:30.677848  112536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:49:30.716526  112536 command_runner.go:130] > {
	I0420 00:49:30.716556  112536 command_runner.go:130] >   "images": [
	I0420 00:49:30.716563  112536 command_runner.go:130] >     {
	I0420 00:49:30.716576  112536 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0420 00:49:30.716594  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.716607  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0420 00:49:30.716613  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716620  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.716640  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0420 00:49:30.716651  112536 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0420 00:49:30.716661  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716668  112536 command_runner.go:130] >       "size": "65291810",
	I0420 00:49:30.716675  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.716680  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.716705  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.716716  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.716722  112536 command_runner.go:130] >     },
	I0420 00:49:30.716727  112536 command_runner.go:130] >     {
	I0420 00:49:30.716739  112536 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0420 00:49:30.716749  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.716758  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0420 00:49:30.716767  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716774  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.716788  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0420 00:49:30.716797  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0420 00:49:30.716804  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716808  112536 command_runner.go:130] >       "size": "1363676",
	I0420 00:49:30.716814  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.716820  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.716827  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.716831  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.716837  112536 command_runner.go:130] >     },
	I0420 00:49:30.716841  112536 command_runner.go:130] >     {
	I0420 00:49:30.716849  112536 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0420 00:49:30.716855  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.716861  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0420 00:49:30.716867  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716872  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.716882  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0420 00:49:30.716891  112536 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0420 00:49:30.716897  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716901  112536 command_runner.go:130] >       "size": "31470524",
	I0420 00:49:30.716907  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.716911  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.716918  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.716922  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.716928  112536 command_runner.go:130] >     },
	I0420 00:49:30.716931  112536 command_runner.go:130] >     {
	I0420 00:49:30.716940  112536 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0420 00:49:30.716946  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.716956  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0420 00:49:30.716962  112536 command_runner.go:130] >       ],
	I0420 00:49:30.716966  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.716976  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0420 00:49:30.716995  112536 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0420 00:49:30.717001  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717005  112536 command_runner.go:130] >       "size": "61245718",
	I0420 00:49:30.717012  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.717016  112536 command_runner.go:130] >       "username": "nonroot",
	I0420 00:49:30.717024  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717029  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717033  112536 command_runner.go:130] >     },
	I0420 00:49:30.717038  112536 command_runner.go:130] >     {
	I0420 00:49:30.717046  112536 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0420 00:49:30.717053  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717058  112536 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0420 00:49:30.717063  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717067  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717076  112536 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0420 00:49:30.717089  112536 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0420 00:49:30.717097  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717104  112536 command_runner.go:130] >       "size": "150779692",
	I0420 00:49:30.717108  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717111  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.717116  112536 command_runner.go:130] >       },
	I0420 00:49:30.717120  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717126  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717130  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717136  112536 command_runner.go:130] >     },
	I0420 00:49:30.717140  112536 command_runner.go:130] >     {
	I0420 00:49:30.717145  112536 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0420 00:49:30.717152  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717156  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0420 00:49:30.717162  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717166  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717176  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0420 00:49:30.717191  112536 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0420 00:49:30.717196  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717200  112536 command_runner.go:130] >       "size": "117609952",
	I0420 00:49:30.717203  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717210  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.717213  112536 command_runner.go:130] >       },
	I0420 00:49:30.717219  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717223  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717229  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717233  112536 command_runner.go:130] >     },
	I0420 00:49:30.717238  112536 command_runner.go:130] >     {
	I0420 00:49:30.717245  112536 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0420 00:49:30.717251  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717256  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0420 00:49:30.717262  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717266  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717276  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0420 00:49:30.717285  112536 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0420 00:49:30.717291  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717297  112536 command_runner.go:130] >       "size": "112170310",
	I0420 00:49:30.717303  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717317  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.717326  112536 command_runner.go:130] >       },
	I0420 00:49:30.717332  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717341  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717347  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717351  112536 command_runner.go:130] >     },
	I0420 00:49:30.717356  112536 command_runner.go:130] >     {
	I0420 00:49:30.717362  112536 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0420 00:49:30.717369  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717373  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0420 00:49:30.717379  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717384  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717406  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0420 00:49:30.717416  112536 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0420 00:49:30.717420  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717431  112536 command_runner.go:130] >       "size": "85932953",
	I0420 00:49:30.717437  112536 command_runner.go:130] >       "uid": null,
	I0420 00:49:30.717441  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717448  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717451  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717457  112536 command_runner.go:130] >     },
	I0420 00:49:30.717461  112536 command_runner.go:130] >     {
	I0420 00:49:30.717471  112536 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0420 00:49:30.717480  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717492  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0420 00:49:30.717501  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717510  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717524  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0420 00:49:30.717539  112536 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0420 00:49:30.717547  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717556  112536 command_runner.go:130] >       "size": "63026502",
	I0420 00:49:30.717565  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717574  112536 command_runner.go:130] >         "value": "0"
	I0420 00:49:30.717586  112536 command_runner.go:130] >       },
	I0420 00:49:30.717600  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717606  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717610  112536 command_runner.go:130] >       "pinned": false
	I0420 00:49:30.717616  112536 command_runner.go:130] >     },
	I0420 00:49:30.717620  112536 command_runner.go:130] >     {
	I0420 00:49:30.717628  112536 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0420 00:49:30.717632  112536 command_runner.go:130] >       "repoTags": [
	I0420 00:49:30.717639  112536 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0420 00:49:30.717642  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717649  112536 command_runner.go:130] >       "repoDigests": [
	I0420 00:49:30.717656  112536 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0420 00:49:30.717668  112536 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0420 00:49:30.717674  112536 command_runner.go:130] >       ],
	I0420 00:49:30.717678  112536 command_runner.go:130] >       "size": "750414",
	I0420 00:49:30.717684  112536 command_runner.go:130] >       "uid": {
	I0420 00:49:30.717688  112536 command_runner.go:130] >         "value": "65535"
	I0420 00:49:30.717692  112536 command_runner.go:130] >       },
	I0420 00:49:30.717705  112536 command_runner.go:130] >       "username": "",
	I0420 00:49:30.717711  112536 command_runner.go:130] >       "spec": null,
	I0420 00:49:30.717715  112536 command_runner.go:130] >       "pinned": true
	I0420 00:49:30.717721  112536 command_runner.go:130] >     }
	I0420 00:49:30.717724  112536 command_runner.go:130] >   ]
	I0420 00:49:30.717728  112536 command_runner.go:130] > }
	I0420 00:49:30.717835  112536 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:49:30.717847  112536 cache_images.go:84] Images are preloaded, skipping loading
	I0420 00:49:30.717854  112536 kubeadm.go:928] updating node { 192.168.39.200 8443 v1.30.0 crio true true} ...
	I0420 00:49:30.718011  112536 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-059001 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
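The kubelet drop-in above pins --hostname-override, --node-ip and the bootstrap kubeconfig for this control-plane node. The unit file path is not shown in the log, so a hedged way to confirm the flags actually in effect is to ask systemd and the running process directly:
	systemctl cat kubelet | grep -A 2 '^ExecStart='      # shows the drop-in's ExecStart override
	ps -o args= -C kubelet | tr ' ' '\n' | grep -E -- '--(node-ip|hostname-override)='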
	I0420 00:49:30.718099  112536 ssh_runner.go:195] Run: crio config
	I0420 00:49:30.758394  112536 command_runner.go:130] ! time="2024-04-20 00:49:30.736041994Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0420 00:49:30.769867  112536 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0420 00:49:30.777771  112536 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0420 00:49:30.777797  112536 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0420 00:49:30.777808  112536 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0420 00:49:30.777813  112536 command_runner.go:130] > #
	I0420 00:49:30.777822  112536 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0420 00:49:30.777832  112536 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0420 00:49:30.777842  112536 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0420 00:49:30.777863  112536 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0420 00:49:30.777872  112536 command_runner.go:130] > # reload'.
	I0420 00:49:30.777888  112536 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0420 00:49:30.777901  112536 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0420 00:49:30.777921  112536 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0420 00:49:30.777933  112536 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0420 00:49:30.777942  112536 command_runner.go:130] > [crio]
	I0420 00:49:30.777954  112536 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0420 00:49:30.777965  112536 command_runner.go:130] > # containers images, in this directory.
	I0420 00:49:30.777976  112536 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0420 00:49:30.777995  112536 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0420 00:49:30.778006  112536 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0420 00:49:30.778020  112536 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0420 00:49:30.778027  112536 command_runner.go:130] > # imagestore = ""
	I0420 00:49:30.778034  112536 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0420 00:49:30.778042  112536 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0420 00:49:30.778046  112536 command_runner.go:130] > storage_driver = "overlay"
	I0420 00:49:30.778055  112536 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0420 00:49:30.778060  112536 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0420 00:49:30.778064  112536 command_runner.go:130] > storage_option = [
	I0420 00:49:30.778068  112536 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0420 00:49:30.778071  112536 command_runner.go:130] > ]
	I0420 00:49:30.778080  112536 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0420 00:49:30.778086  112536 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0420 00:49:30.778093  112536 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0420 00:49:30.778098  112536 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0420 00:49:30.778106  112536 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0420 00:49:30.778114  112536 command_runner.go:130] > # always happen on a node reboot
	I0420 00:49:30.778119  112536 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0420 00:49:30.778139  112536 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0420 00:49:30.778148  112536 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0420 00:49:30.778153  112536 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0420 00:49:30.778160  112536 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0420 00:49:30.778167  112536 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0420 00:49:30.778177  112536 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0420 00:49:30.778183  112536 command_runner.go:130] > # internal_wipe = true
	I0420 00:49:30.778191  112536 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0420 00:49:30.778198  112536 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0420 00:49:30.778207  112536 command_runner.go:130] > # internal_repair = false
	I0420 00:49:30.778215  112536 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0420 00:49:30.778222  112536 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0420 00:49:30.778229  112536 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0420 00:49:30.778236  112536 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0420 00:49:30.778247  112536 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0420 00:49:30.778253  112536 command_runner.go:130] > [crio.api]
	I0420 00:49:30.778258  112536 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0420 00:49:30.778262  112536 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0420 00:49:30.778268  112536 command_runner.go:130] > # IP address on which the stream server will listen.
	I0420 00:49:30.778272  112536 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0420 00:49:30.778280  112536 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0420 00:49:30.778288  112536 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0420 00:49:30.778292  112536 command_runner.go:130] > # stream_port = "0"
	I0420 00:49:30.778300  112536 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0420 00:49:30.778304  112536 command_runner.go:130] > # stream_enable_tls = false
	I0420 00:49:30.778312  112536 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0420 00:49:30.778316  112536 command_runner.go:130] > # stream_idle_timeout = ""
	I0420 00:49:30.778325  112536 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0420 00:49:30.778331  112536 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0420 00:49:30.778337  112536 command_runner.go:130] > # minutes.
	I0420 00:49:30.778340  112536 command_runner.go:130] > # stream_tls_cert = ""
	I0420 00:49:30.778349  112536 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0420 00:49:30.778356  112536 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0420 00:49:30.778363  112536 command_runner.go:130] > # stream_tls_key = ""
	I0420 00:49:30.778369  112536 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0420 00:49:30.778377  112536 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0420 00:49:30.778396  112536 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0420 00:49:30.778402  112536 command_runner.go:130] > # stream_tls_ca = ""
	I0420 00:49:30.778410  112536 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0420 00:49:30.778417  112536 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0420 00:49:30.778424  112536 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0420 00:49:30.778436  112536 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0420 00:49:30.778444  112536 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0420 00:49:30.778451  112536 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0420 00:49:30.778458  112536 command_runner.go:130] > [crio.runtime]
	I0420 00:49:30.778468  112536 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0420 00:49:30.778479  112536 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0420 00:49:30.778487  112536 command_runner.go:130] > # "nofile=1024:2048"
	I0420 00:49:30.778500  112536 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0420 00:49:30.778510  112536 command_runner.go:130] > # default_ulimits = [
	I0420 00:49:30.778517  112536 command_runner.go:130] > # ]
	I0420 00:49:30.778526  112536 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0420 00:49:30.778535  112536 command_runner.go:130] > # no_pivot = false
	I0420 00:49:30.778551  112536 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0420 00:49:30.778564  112536 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0420 00:49:30.778575  112536 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0420 00:49:30.778587  112536 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0420 00:49:30.778598  112536 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0420 00:49:30.778611  112536 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0420 00:49:30.778618  112536 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0420 00:49:30.778623  112536 command_runner.go:130] > # Cgroup setting for conmon
	I0420 00:49:30.778629  112536 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0420 00:49:30.778640  112536 command_runner.go:130] > conmon_cgroup = "pod"
	I0420 00:49:30.778647  112536 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0420 00:49:30.778654  112536 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0420 00:49:30.778661  112536 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0420 00:49:30.778667  112536 command_runner.go:130] > conmon_env = [
	I0420 00:49:30.778673  112536 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0420 00:49:30.778679  112536 command_runner.go:130] > ]
	I0420 00:49:30.778684  112536 command_runner.go:130] > # Additional environment variables to set for all the
	I0420 00:49:30.778691  112536 command_runner.go:130] > # containers. These are overridden if set in the
	I0420 00:49:30.778696  112536 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0420 00:49:30.778703  112536 command_runner.go:130] > # default_env = [
	I0420 00:49:30.778706  112536 command_runner.go:130] > # ]
	I0420 00:49:30.778714  112536 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0420 00:49:30.778721  112536 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0420 00:49:30.778727  112536 command_runner.go:130] > # selinux = false
	I0420 00:49:30.778733  112536 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0420 00:49:30.778741  112536 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0420 00:49:30.778747  112536 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0420 00:49:30.778754  112536 command_runner.go:130] > # seccomp_profile = ""
	I0420 00:49:30.778767  112536 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0420 00:49:30.778775  112536 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0420 00:49:30.778781  112536 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0420 00:49:30.778787  112536 command_runner.go:130] > # which might increase security.
	I0420 00:49:30.778792  112536 command_runner.go:130] > # This option is currently deprecated,
	I0420 00:49:30.778797  112536 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0420 00:49:30.778804  112536 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0420 00:49:30.778810  112536 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0420 00:49:30.778819  112536 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0420 00:49:30.778825  112536 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0420 00:49:30.778835  112536 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0420 00:49:30.778842  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.778846  112536 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0420 00:49:30.778854  112536 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0420 00:49:30.778858  112536 command_runner.go:130] > # the cgroup blockio controller.
	I0420 00:49:30.778863  112536 command_runner.go:130] > # blockio_config_file = ""
	I0420 00:49:30.778870  112536 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0420 00:49:30.778876  112536 command_runner.go:130] > # blockio parameters.
	I0420 00:49:30.778880  112536 command_runner.go:130] > # blockio_reload = false
	I0420 00:49:30.778887  112536 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0420 00:49:30.778893  112536 command_runner.go:130] > # irqbalance daemon.
	I0420 00:49:30.778898  112536 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0420 00:49:30.778907  112536 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0420 00:49:30.778913  112536 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0420 00:49:30.778922  112536 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0420 00:49:30.778927  112536 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0420 00:49:30.778935  112536 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0420 00:49:30.778940  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.778947  112536 command_runner.go:130] > # rdt_config_file = ""
	I0420 00:49:30.778952  112536 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0420 00:49:30.778958  112536 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0420 00:49:30.778987  112536 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0420 00:49:30.778995  112536 command_runner.go:130] > # separate_pull_cgroup = ""
	I0420 00:49:30.779003  112536 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0420 00:49:30.779009  112536 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0420 00:49:30.779015  112536 command_runner.go:130] > # will be added.
	I0420 00:49:30.779130  112536 command_runner.go:130] > # default_capabilities = [
	I0420 00:49:30.779271  112536 command_runner.go:130] > # 	"CHOWN",
	I0420 00:49:30.779285  112536 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0420 00:49:30.779290  112536 command_runner.go:130] > # 	"FSETID",
	I0420 00:49:30.779296  112536 command_runner.go:130] > # 	"FOWNER",
	I0420 00:49:30.779302  112536 command_runner.go:130] > # 	"SETGID",
	I0420 00:49:30.779308  112536 command_runner.go:130] > # 	"SETUID",
	I0420 00:49:30.779315  112536 command_runner.go:130] > # 	"SETPCAP",
	I0420 00:49:30.779410  112536 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0420 00:49:30.779441  112536 command_runner.go:130] > # 	"KILL",
	I0420 00:49:30.779446  112536 command_runner.go:130] > # ]
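	As a sketch only, the commented default above could be narrowed like this; the chosen subset is an assumption for illustration, and any capability omitted here would have to be requested explicitly per container:

	[crio.runtime]
	# Illustrative reduced capability set.
	default_capabilities = [
		"CHOWN",
		"SETGID",
		"SETUID",
		"NET_BIND_SERVICE",
	]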
	I0420 00:49:30.779463  112536 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0420 00:49:30.779487  112536 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0420 00:49:30.779498  112536 command_runner.go:130] > # add_inheritable_capabilities = false
	I0420 00:49:30.779514  112536 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0420 00:49:30.779528  112536 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0420 00:49:30.779535  112536 command_runner.go:130] > default_sysctls = [
	I0420 00:49:30.779542  112536 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0420 00:49:30.779547  112536 command_runner.go:130] > ]
	I0420 00:49:30.779560  112536 command_runner.go:130] > # List of devices on the host that a
	I0420 00:49:30.779570  112536 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0420 00:49:30.779576  112536 command_runner.go:130] > # allowed_devices = [
	I0420 00:49:30.779582  112536 command_runner.go:130] > # 	"/dev/fuse",
	I0420 00:49:30.779587  112536 command_runner.go:130] > # ]
	I0420 00:49:30.779600  112536 command_runner.go:130] > # List of additional devices, specified as
	I0420 00:49:30.779611  112536 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0420 00:49:30.779620  112536 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0420 00:49:30.779634  112536 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0420 00:49:30.779640  112536 command_runner.go:130] > # additional_devices = [
	I0420 00:49:30.779646  112536 command_runner.go:130] > # ]
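	A minimal sketch of the additional_devices syntax documented above, reusing /dev/fuse (already listed under allowed_devices) purely as an example:

	[crio.runtime]
	# Illustrative: expose the host FUSE device with read/write/mknod permissions.
	additional_devices = [
		"/dev/fuse:/dev/fuse:rwm",
	]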
	I0420 00:49:30.779654  112536 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0420 00:49:30.779660  112536 command_runner.go:130] > # cdi_spec_dirs = [
	I0420 00:49:30.779671  112536 command_runner.go:130] > # 	"/etc/cdi",
	I0420 00:49:30.779677  112536 command_runner.go:130] > # 	"/var/run/cdi",
	I0420 00:49:30.779683  112536 command_runner.go:130] > # ]
	I0420 00:49:30.779692  112536 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0420 00:49:30.779708  112536 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0420 00:49:30.779715  112536 command_runner.go:130] > # Defaults to false.
	I0420 00:49:30.779723  112536 command_runner.go:130] > # device_ownership_from_security_context = false
	I0420 00:49:30.779734  112536 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0420 00:49:30.779748  112536 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0420 00:49:30.779754  112536 command_runner.go:130] > # hooks_dir = [
	I0420 00:49:30.779762  112536 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0420 00:49:30.779767  112536 command_runner.go:130] > # ]
	I0420 00:49:30.779788  112536 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0420 00:49:30.779799  112536 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0420 00:49:30.779807  112536 command_runner.go:130] > # its default mounts from the following two files:
	I0420 00:49:30.779812  112536 command_runner.go:130] > #
	I0420 00:49:30.779826  112536 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0420 00:49:30.779837  112536 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0420 00:49:30.779853  112536 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0420 00:49:30.779858  112536 command_runner.go:130] > #
	I0420 00:49:30.779868  112536 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0420 00:49:30.779879  112536 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0420 00:49:30.779894  112536 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0420 00:49:30.779904  112536 command_runner.go:130] > #      only add mounts it finds in this file.
	I0420 00:49:30.779908  112536 command_runner.go:130] > #
	I0420 00:49:30.779918  112536 command_runner.go:130] > # default_mounts_file = ""
	I0420 00:49:30.779931  112536 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0420 00:49:30.779940  112536 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0420 00:49:30.779946  112536 command_runner.go:130] > pids_limit = 1024
	I0420 00:49:30.779960  112536 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0420 00:49:30.779968  112536 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0420 00:49:30.779977  112536 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0420 00:49:30.779994  112536 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0420 00:49:30.780000  112536 command_runner.go:130] > # log_size_max = -1
	I0420 00:49:30.780014  112536 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0420 00:49:30.780023  112536 command_runner.go:130] > # log_to_journald = false
	I0420 00:49:30.780034  112536 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0420 00:49:30.780049  112536 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0420 00:49:30.780067  112536 command_runner.go:130] > # Path to directory for container attach sockets.
	I0420 00:49:30.780080  112536 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0420 00:49:30.780090  112536 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0420 00:49:30.780104  112536 command_runner.go:130] > # bind_mount_prefix = ""
	I0420 00:49:30.780141  112536 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0420 00:49:30.780148  112536 command_runner.go:130] > # read_only = false
	I0420 00:49:30.780163  112536 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0420 00:49:30.780185  112536 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0420 00:49:30.780197  112536 command_runner.go:130] > # live configuration reload.
	I0420 00:49:30.780207  112536 command_runner.go:130] > # log_level = "info"
	I0420 00:49:30.780279  112536 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0420 00:49:30.780348  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.780360  112536 command_runner.go:130] > # log_filter = ""
	I0420 00:49:30.780375  112536 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0420 00:49:30.780398  112536 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0420 00:49:30.780411  112536 command_runner.go:130] > # separated by comma.
	I0420 00:49:30.780423  112536 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0420 00:49:30.780435  112536 command_runner.go:130] > # uid_mappings = ""
	I0420 00:49:30.780445  112536 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0420 00:49:30.780457  112536 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0420 00:49:30.780472  112536 command_runner.go:130] > # separated by comma.
	I0420 00:49:30.780487  112536 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0420 00:49:30.780496  112536 command_runner.go:130] > # gid_mappings = ""
	I0420 00:49:30.780515  112536 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0420 00:49:30.780525  112536 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0420 00:49:30.780534  112536 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0420 00:49:30.780550  112536 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0420 00:49:30.780559  112536 command_runner.go:130] > # minimum_mappable_uid = -1
	I0420 00:49:30.780568  112536 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0420 00:49:30.780582  112536 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0420 00:49:30.780591  112536 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0420 00:49:30.780607  112536 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0420 00:49:30.780614  112536 command_runner.go:130] > # minimum_mappable_gid = -1
	I0420 00:49:30.780623  112536 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0420 00:49:30.780636  112536 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0420 00:49:30.780644  112536 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0420 00:49:30.780651  112536 command_runner.go:130] > # ctr_stop_timeout = 30
	I0420 00:49:30.780664  112536 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0420 00:49:30.780673  112536 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0420 00:49:30.780685  112536 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0420 00:49:30.780693  112536 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0420 00:49:30.780769  112536 command_runner.go:130] > drop_infra_ctr = false
	I0420 00:49:30.780779  112536 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0420 00:49:30.781010  112536 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0420 00:49:30.781035  112536 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0420 00:49:30.781044  112536 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0420 00:49:30.781059  112536 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0420 00:49:30.781067  112536 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0420 00:49:30.781079  112536 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0420 00:49:30.781091  112536 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0420 00:49:30.781101  112536 command_runner.go:130] > # shared_cpuset = ""
	I0420 00:49:30.781114  112536 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0420 00:49:30.781126  112536 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0420 00:49:30.781135  112536 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0420 00:49:30.781150  112536 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0420 00:49:30.781157  112536 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0420 00:49:30.781165  112536 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0420 00:49:30.781186  112536 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0420 00:49:30.781199  112536 command_runner.go:130] > # enable_criu_support = false
	I0420 00:49:30.781207  112536 command_runner.go:130] > # Enable/disable the generation of the container and
	I0420 00:49:30.781220  112536 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0420 00:49:30.781231  112536 command_runner.go:130] > # enable_pod_events = false
	I0420 00:49:30.781241  112536 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0420 00:49:30.781266  112536 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0420 00:49:30.781275  112536 command_runner.go:130] > # default_runtime = "runc"
	I0420 00:49:30.781287  112536 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0420 00:49:30.781302  112536 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0420 00:49:30.781332  112536 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0420 00:49:30.781344  112536 command_runner.go:130] > # creation as a file is not desired either.
	I0420 00:49:30.781360  112536 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0420 00:49:30.781371  112536 command_runner.go:130] > # the hostname is being managed dynamically.
	I0420 00:49:30.781382  112536 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0420 00:49:30.781388  112536 command_runner.go:130] > # ]
	I0420 00:49:30.781396  112536 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0420 00:49:30.781410  112536 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0420 00:49:30.781423  112536 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0420 00:49:30.781435  112536 command_runner.go:130] > # Each entry in the table should follow the format:
	I0420 00:49:30.781445  112536 command_runner.go:130] > #
	I0420 00:49:30.781456  112536 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0420 00:49:30.781465  112536 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0420 00:49:30.781519  112536 command_runner.go:130] > # runtime_type = "oci"
	I0420 00:49:30.781540  112536 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0420 00:49:30.781561  112536 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0420 00:49:30.781580  112536 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0420 00:49:30.781593  112536 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0420 00:49:30.781611  112536 command_runner.go:130] > # monitor_env = []
	I0420 00:49:30.781629  112536 command_runner.go:130] > # privileged_without_host_devices = false
	I0420 00:49:30.781656  112536 command_runner.go:130] > # allowed_annotations = []
	I0420 00:49:30.781669  112536 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0420 00:49:30.781677  112536 command_runner.go:130] > # Where:
	I0420 00:49:30.781683  112536 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0420 00:49:30.781692  112536 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0420 00:49:30.781698  112536 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0420 00:49:30.781704  112536 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0420 00:49:30.781712  112536 command_runner.go:130] > #   in $PATH.
	I0420 00:49:30.781717  112536 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0420 00:49:30.781722  112536 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0420 00:49:30.781728  112536 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0420 00:49:30.781734  112536 command_runner.go:130] > #   state.
	I0420 00:49:30.781741  112536 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0420 00:49:30.781749  112536 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0420 00:49:30.781757  112536 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0420 00:49:30.781763  112536 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0420 00:49:30.781770  112536 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0420 00:49:30.781779  112536 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0420 00:49:30.781784  112536 command_runner.go:130] > #   The currently recognized values are:
	I0420 00:49:30.781793  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0420 00:49:30.781801  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0420 00:49:30.781809  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0420 00:49:30.781815  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0420 00:49:30.781825  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0420 00:49:30.781832  112536 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0420 00:49:30.781842  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0420 00:49:30.781851  112536 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0420 00:49:30.781860  112536 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0420 00:49:30.781868  112536 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0420 00:49:30.781873  112536 command_runner.go:130] > #   deprecated option "conmon".
	I0420 00:49:30.781880  112536 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0420 00:49:30.781885  112536 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0420 00:49:30.781894  112536 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0420 00:49:30.781899  112536 command_runner.go:130] > #   should be moved to the container's cgroup
	I0420 00:49:30.781905  112536 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0420 00:49:30.781912  112536 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0420 00:49:30.781919  112536 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0420 00:49:30.781926  112536 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0420 00:49:30.781929  112536 command_runner.go:130] > #
	I0420 00:49:30.781934  112536 command_runner.go:130] > # Using the seccomp notifier feature:
	I0420 00:49:30.781942  112536 command_runner.go:130] > #
	I0420 00:49:30.781950  112536 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0420 00:49:30.781957  112536 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0420 00:49:30.781962  112536 command_runner.go:130] > #
	I0420 00:49:30.781968  112536 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0420 00:49:30.781976  112536 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0420 00:49:30.781979  112536 command_runner.go:130] > #
	I0420 00:49:30.781985  112536 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0420 00:49:30.781990  112536 command_runner.go:130] > # feature.
	I0420 00:49:30.781994  112536 command_runner.go:130] > #
	I0420 00:49:30.781999  112536 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0420 00:49:30.782008  112536 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0420 00:49:30.782013  112536 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0420 00:49:30.782021  112536 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0420 00:49:30.782028  112536 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0420 00:49:30.782034  112536 command_runner.go:130] > #
	I0420 00:49:30.782042  112536 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0420 00:49:30.782050  112536 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0420 00:49:30.782053  112536 command_runner.go:130] > #
	I0420 00:49:30.782059  112536 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0420 00:49:30.782068  112536 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0420 00:49:30.782071  112536 command_runner.go:130] > #
	I0420 00:49:30.782076  112536 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0420 00:49:30.782083  112536 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0420 00:49:30.782087  112536 command_runner.go:130] > # limitation.
	I0420 00:49:30.782093  112536 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0420 00:49:30.782099  112536 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0420 00:49:30.782104  112536 command_runner.go:130] > runtime_type = "oci"
	I0420 00:49:30.782111  112536 command_runner.go:130] > runtime_root = "/run/runc"
	I0420 00:49:30.782114  112536 command_runner.go:130] > runtime_config_path = ""
	I0420 00:49:30.782119  112536 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0420 00:49:30.782126  112536 command_runner.go:130] > monitor_cgroup = "pod"
	I0420 00:49:30.782130  112536 command_runner.go:130] > monitor_exec_cgroup = ""
	I0420 00:49:30.782135  112536 command_runner.go:130] > monitor_env = [
	I0420 00:49:30.782141  112536 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0420 00:49:30.782147  112536 command_runner.go:130] > ]
	I0420 00:49:30.782152  112536 command_runner.go:130] > privileged_without_host_devices = false
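	Only the runc handler above is defined in this run. As a sketch of the handler table format documented earlier, a hypothetical second handler might look like this (the crun name, paths and annotation choice are assumptions):

	[crio.runtime.runtimes.crun]
	# Hypothetical alternative OCI runtime; requires a crun binary at this path on the host.
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Let pods scheduled onto this handler request a custom /dev/shm size via annotation.
	allowed_annotations = [
		"io.kubernetes.cri-o.ShmSize",
	]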
	I0420 00:49:30.782158  112536 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0420 00:49:30.782166  112536 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0420 00:49:30.782172  112536 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0420 00:49:30.782181  112536 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0420 00:49:30.782191  112536 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0420 00:49:30.782199  112536 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0420 00:49:30.782208  112536 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0420 00:49:30.782218  112536 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0420 00:49:30.782223  112536 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0420 00:49:30.782231  112536 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0420 00:49:30.782237  112536 command_runner.go:130] > # Example:
	I0420 00:49:30.782241  112536 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0420 00:49:30.782249  112536 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0420 00:49:30.782254  112536 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0420 00:49:30.782260  112536 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0420 00:49:30.782264  112536 command_runner.go:130] > # cpuset = 0
	I0420 00:49:30.782268  112536 command_runner.go:130] > # cpushares = "0-1"
	I0420 00:49:30.782271  112536 command_runner.go:130] > # Where:
	I0420 00:49:30.782276  112536 command_runner.go:130] > # The workload name is workload-type.
	I0420 00:49:30.782282  112536 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0420 00:49:30.782290  112536 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0420 00:49:30.782295  112536 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0420 00:49:30.782305  112536 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0420 00:49:30.782311  112536 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
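	Assembled into actual TOML, the commented workload example above would look roughly like this; the resource values are illustrative placeholders rather than values from this run, and the table remains experimental as noted:

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	# Defaults applied to every opted-in container unless overridden per container.
	cpuset = "0-1"
	cpushares = 1024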
	I0420 00:49:30.782317  112536 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0420 00:49:30.782323  112536 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0420 00:49:30.782330  112536 command_runner.go:130] > # Default value is set to true
	I0420 00:49:30.782334  112536 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0420 00:49:30.782342  112536 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0420 00:49:30.782346  112536 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0420 00:49:30.782353  112536 command_runner.go:130] > # Default value is set to 'false'
	I0420 00:49:30.782358  112536 command_runner.go:130] > # disable_hostport_mapping = false
	I0420 00:49:30.782367  112536 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0420 00:49:30.782370  112536 command_runner.go:130] > #
	I0420 00:49:30.782379  112536 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0420 00:49:30.782384  112536 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0420 00:49:30.782390  112536 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0420 00:49:30.782397  112536 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0420 00:49:30.782401  112536 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0420 00:49:30.782407  112536 command_runner.go:130] > [crio.image]
	I0420 00:49:30.782412  112536 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0420 00:49:30.782416  112536 command_runner.go:130] > # default_transport = "docker://"
	I0420 00:49:30.782422  112536 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0420 00:49:30.782427  112536 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0420 00:49:30.782431  112536 command_runner.go:130] > # global_auth_file = ""
	I0420 00:49:30.782435  112536 command_runner.go:130] > # The image used to instantiate infra containers.
	I0420 00:49:30.782440  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.782444  112536 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0420 00:49:30.782452  112536 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0420 00:49:30.782458  112536 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0420 00:49:30.782463  112536 command_runner.go:130] > # This option supports live configuration reload.
	I0420 00:49:30.782469  112536 command_runner.go:130] > # pause_image_auth_file = ""
	I0420 00:49:30.782476  112536 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0420 00:49:30.782484  112536 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0420 00:49:30.782493  112536 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0420 00:49:30.782506  112536 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0420 00:49:30.782512  112536 command_runner.go:130] > # pause_command = "/pause"
	I0420 00:49:30.782521  112536 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0420 00:49:30.782533  112536 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0420 00:49:30.782545  112536 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0420 00:49:30.782560  112536 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0420 00:49:30.782573  112536 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0420 00:49:30.782584  112536 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0420 00:49:30.782593  112536 command_runner.go:130] > # pinned_images = [
	I0420 00:49:30.782599  112536 command_runner.go:130] > # ]
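	An illustrative pinned_images entry matching the pattern rules just described; the pause image is the commented default above, while the second entry is a made-up glob example:

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.9",
		"quay.io/example/critical-agent*",
	]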
	I0420 00:49:30.782610  112536 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0420 00:49:30.782618  112536 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0420 00:49:30.782624  112536 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0420 00:49:30.782637  112536 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0420 00:49:30.782645  112536 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0420 00:49:30.782649  112536 command_runner.go:130] > # signature_policy = ""
	I0420 00:49:30.782655  112536 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0420 00:49:30.782662  112536 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0420 00:49:30.782670  112536 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0420 00:49:30.782676  112536 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0420 00:49:30.782688  112536 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0420 00:49:30.782698  112536 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0420 00:49:30.782704  112536 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0420 00:49:30.782713  112536 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0420 00:49:30.782717  112536 command_runner.go:130] > # changing them here.
	I0420 00:49:30.782724  112536 command_runner.go:130] > # insecure_registries = [
	I0420 00:49:30.782727  112536 command_runner.go:130] > # ]
	I0420 00:49:30.782735  112536 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0420 00:49:30.782740  112536 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0420 00:49:30.782747  112536 command_runner.go:130] > # image_volumes = "mkdir"
	I0420 00:49:30.782755  112536 command_runner.go:130] > # Temporary directory to use for storing big files
	I0420 00:49:30.782759  112536 command_runner.go:130] > # big_files_temporary_dir = ""
	I0420 00:49:30.782766  112536 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0420 00:49:30.782772  112536 command_runner.go:130] > # CNI plugins.
	I0420 00:49:30.782776  112536 command_runner.go:130] > [crio.network]
	I0420 00:49:30.782784  112536 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0420 00:49:30.782790  112536 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0420 00:49:30.782796  112536 command_runner.go:130] > # cni_default_network = ""
	I0420 00:49:30.782802  112536 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0420 00:49:30.782808  112536 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0420 00:49:30.782813  112536 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0420 00:49:30.782819  112536 command_runner.go:130] > # plugin_dirs = [
	I0420 00:49:30.782823  112536 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0420 00:49:30.782826  112536 command_runner.go:130] > # ]
	I0420 00:49:30.782831  112536 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0420 00:49:30.782835  112536 command_runner.go:130] > [crio.metrics]
	I0420 00:49:30.782840  112536 command_runner.go:130] > # Globally enable or disable metrics support.
	I0420 00:49:30.782846  112536 command_runner.go:130] > enable_metrics = true
	I0420 00:49:30.782851  112536 command_runner.go:130] > # Specify enabled metrics collectors.
	I0420 00:49:30.782858  112536 command_runner.go:130] > # Per default all metrics are enabled.
	I0420 00:49:30.782864  112536 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0420 00:49:30.782873  112536 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0420 00:49:30.782878  112536 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0420 00:49:30.782884  112536 command_runner.go:130] > # metrics_collectors = [
	I0420 00:49:30.782888  112536 command_runner.go:130] > # 	"operations",
	I0420 00:49:30.782892  112536 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0420 00:49:30.782899  112536 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0420 00:49:30.782903  112536 command_runner.go:130] > # 	"operations_errors",
	I0420 00:49:30.782908  112536 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0420 00:49:30.782913  112536 command_runner.go:130] > # 	"image_pulls_by_name",
	I0420 00:49:30.782918  112536 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0420 00:49:30.782924  112536 command_runner.go:130] > # 	"image_pulls_failures",
	I0420 00:49:30.782930  112536 command_runner.go:130] > # 	"image_pulls_successes",
	I0420 00:49:30.782937  112536 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0420 00:49:30.782941  112536 command_runner.go:130] > # 	"image_layer_reuse",
	I0420 00:49:30.782946  112536 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0420 00:49:30.782953  112536 command_runner.go:130] > # 	"containers_oom_total",
	I0420 00:49:30.782956  112536 command_runner.go:130] > # 	"containers_oom",
	I0420 00:49:30.782962  112536 command_runner.go:130] > # 	"processes_defunct",
	I0420 00:49:30.782966  112536 command_runner.go:130] > # 	"operations_total",
	I0420 00:49:30.782971  112536 command_runner.go:130] > # 	"operations_latency_seconds",
	I0420 00:49:30.782978  112536 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0420 00:49:30.782982  112536 command_runner.go:130] > # 	"operations_errors_total",
	I0420 00:49:30.782986  112536 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0420 00:49:30.783002  112536 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0420 00:49:30.783011  112536 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0420 00:49:30.783016  112536 command_runner.go:130] > # 	"image_pulls_success_total",
	I0420 00:49:30.783022  112536 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0420 00:49:30.783027  112536 command_runner.go:130] > # 	"containers_oom_count_total",
	I0420 00:49:30.783033  112536 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0420 00:49:30.783038  112536 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0420 00:49:30.783042  112536 command_runner.go:130] > # ]
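	A sketch of narrowing the collectors using names from the commented list above, keeping the commented default port; the particular subset is an assumption:

	[crio.metrics]
	enable_metrics = true
	# Illustrative subset: image-pull and OOM related metrics only.
	metrics_collectors = [
		"image_pulls_by_name",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]
	metrics_port = 9090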
	I0420 00:49:30.783047  112536 command_runner.go:130] > # The port on which the metrics server will listen.
	I0420 00:49:30.783053  112536 command_runner.go:130] > # metrics_port = 9090
	I0420 00:49:30.783058  112536 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0420 00:49:30.783064  112536 command_runner.go:130] > # metrics_socket = ""
	I0420 00:49:30.783069  112536 command_runner.go:130] > # The certificate for the secure metrics server.
	I0420 00:49:30.783075  112536 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0420 00:49:30.783083  112536 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0420 00:49:30.783088  112536 command_runner.go:130] > # certificate on any modification event.
	I0420 00:49:30.783094  112536 command_runner.go:130] > # metrics_cert = ""
	I0420 00:49:30.783099  112536 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0420 00:49:30.783106  112536 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0420 00:49:30.783110  112536 command_runner.go:130] > # metrics_key = ""
	I0420 00:49:30.783115  112536 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0420 00:49:30.783119  112536 command_runner.go:130] > [crio.tracing]
	I0420 00:49:30.783127  112536 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0420 00:49:30.783131  112536 command_runner.go:130] > # enable_tracing = false
	I0420 00:49:30.783137  112536 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0420 00:49:30.783142  112536 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0420 00:49:30.783150  112536 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0420 00:49:30.783159  112536 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0420 00:49:30.783164  112536 command_runner.go:130] > # CRI-O NRI configuration.
	I0420 00:49:30.783170  112536 command_runner.go:130] > [crio.nri]
	I0420 00:49:30.783174  112536 command_runner.go:130] > # Globally enable or disable NRI.
	I0420 00:49:30.783178  112536 command_runner.go:130] > # enable_nri = false
	I0420 00:49:30.783183  112536 command_runner.go:130] > # NRI socket to listen on.
	I0420 00:49:30.783192  112536 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0420 00:49:30.783199  112536 command_runner.go:130] > # NRI plugin directory to use.
	I0420 00:49:30.783204  112536 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0420 00:49:30.783211  112536 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0420 00:49:30.783216  112536 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0420 00:49:30.783221  112536 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0420 00:49:30.783228  112536 command_runner.go:130] > # nri_disable_connections = false
	I0420 00:49:30.783233  112536 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0420 00:49:30.783237  112536 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0420 00:49:30.783244  112536 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0420 00:49:30.783249  112536 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0420 00:49:30.783257  112536 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0420 00:49:30.783261  112536 command_runner.go:130] > [crio.stats]
	I0420 00:49:30.783269  112536 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0420 00:49:30.783275  112536 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0420 00:49:30.783282  112536 command_runner.go:130] > # stats_collection_period = 0
	I0420 00:49:30.783401  112536 cni.go:84] Creating CNI manager for ""
	I0420 00:49:30.783414  112536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0420 00:49:30.783425  112536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 00:49:30.783445  112536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-059001 NodeName:multinode-059001 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 00:49:30.783642  112536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-059001"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 00:49:30.783723  112536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:49:30.795058  112536 command_runner.go:130] > kubeadm
	I0420 00:49:30.795081  112536 command_runner.go:130] > kubectl
	I0420 00:49:30.795088  112536 command_runner.go:130] > kubelet
	I0420 00:49:30.795110  112536 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 00:49:30.795174  112536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 00:49:30.805330  112536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0420 00:49:30.823866  112536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:49:30.843271  112536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0420 00:49:30.862128  112536 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0420 00:49:30.866579  112536 command_runner.go:130] > 192.168.39.200	control-plane.minikube.internal
	I0420 00:49:30.866657  112536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:49:31.011744  112536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:49:31.028284  112536 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001 for IP: 192.168.39.200
	I0420 00:49:31.028315  112536 certs.go:194] generating shared ca certs ...
	I0420 00:49:31.028337  112536 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:49:31.028526  112536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 00:49:31.028584  112536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 00:49:31.028597  112536 certs.go:256] generating profile certs ...
	I0420 00:49:31.028695  112536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/client.key
	I0420 00:49:31.028752  112536 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.key.73861182
	I0420 00:49:31.028805  112536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.key
	I0420 00:49:31.028818  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 00:49:31.028833  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 00:49:31.028845  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 00:49:31.028856  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 00:49:31.028868  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 00:49:31.028881  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 00:49:31.028893  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 00:49:31.028905  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 00:49:31.028957  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 00:49:31.028989  112536 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 00:49:31.028999  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 00:49:31.029019  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 00:49:31.029042  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:49:31.029061  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 00:49:31.029097  112536 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 00:49:31.029131  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem -> /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.029145  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.029156  112536 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.029863  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:49:31.057216  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:49:31.082773  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:49:31.108867  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 00:49:31.134835  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 00:49:31.160520  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 00:49:31.186784  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:49:31.213175  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/multinode-059001/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 00:49:31.239183  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 00:49:31.265028  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 00:49:31.291295  112536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:49:31.316318  112536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 00:49:31.334828  112536 ssh_runner.go:195] Run: openssl version
	I0420 00:49:31.341270  112536 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0420 00:49:31.341372  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 00:49:31.353571  112536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.358437  112536 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.358576  112536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.358628  112536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 00:49:31.364707  112536 command_runner.go:130] > 3ec20f2e
	I0420 00:49:31.364775  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 00:49:31.375556  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:49:31.387697  112536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.393386  112536 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.393411  112536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.393448  112536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:49:31.399416  112536 command_runner.go:130] > b5213941
	I0420 00:49:31.399512  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 00:49:31.410186  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 00:49:31.422373  112536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.427363  112536 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.427390  112536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.427426  112536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 00:49:31.433585  112536 command_runner.go:130] > 51391683
	I0420 00:49:31.433641  112536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
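	[editor's note] The sequence above shows each CA bundle being copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of that hash-and-symlink step, assuming openssl is on PATH; the helper name installCACert is illustrative, not minikube's actual API, which runs these commands over SSH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert mirrors the logged flow: compute the certificate's subject
	// hash with openssl, then point /etc/ssl/certs/<hash>.0 at the shared PEM.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent of `ln -fs`: drop any stale link, then recreate it.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}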
	I0420 00:49:31.444195  112536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:49:31.449048  112536 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:49:31.449066  112536 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0420 00:49:31.449072  112536 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0420 00:49:31.449078  112536 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0420 00:49:31.449083  112536 command_runner.go:130] > Access: 2024-04-20 00:43:14.037961774 +0000
	I0420 00:49:31.449088  112536 command_runner.go:130] > Modify: 2024-04-20 00:43:14.037961774 +0000
	I0420 00:49:31.449093  112536 command_runner.go:130] > Change: 2024-04-20 00:43:14.037961774 +0000
	I0420 00:49:31.449100  112536 command_runner.go:130] >  Birth: 2024-04-20 00:43:14.037961774 +0000
	I0420 00:49:31.449176  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 00:49:31.455208  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.455262  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 00:49:31.461056  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.461243  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 00:49:31.467176  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.467238  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 00:49:31.473891  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.473952  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 00:49:31.479885  112536 command_runner.go:130] > Certificate will not expire
	I0420 00:49:31.479932  112536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 00:49:31.485689  112536 command_runner.go:130] > Certificate will not expire
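	[editor's note] Each `openssl x509 -checkend 86400` probe above asks whether a certificate expires within the next 24 hours, and "Certificate will not expire" means it does not. A sketch of the same check done in-process with crypto/x509 instead of shelling out; the function name and the 24-hour window are illustrative assumptions, not the code minikube runs:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires inside the given window, analogous to `openssl x509 -checkend`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}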
	I0420 00:49:31.485988  112536 kubeadm.go:391] StartCluster: {Name:multinode-059001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-059001 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:f
alse kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:49:31.486106  112536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:49:31.486143  112536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:49:31.525870  112536 command_runner.go:130] > f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87
	I0420 00:49:31.525894  112536 command_runner.go:130] > 965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969
	I0420 00:49:31.525900  112536 command_runner.go:130] > 0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa
	I0420 00:49:31.525907  112536 command_runner.go:130] > 278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95
	I0420 00:49:31.525912  112536 command_runner.go:130] > e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175
	I0420 00:49:31.525918  112536 command_runner.go:130] > 339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb
	I0420 00:49:31.525923  112536 command_runner.go:130] > 81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c
	I0420 00:49:31.525932  112536 command_runner.go:130] > b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7
	I0420 00:49:31.527410  112536 cri.go:89] found id: "f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87"
	I0420 00:49:31.527431  112536 cri.go:89] found id: "965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969"
	I0420 00:49:31.527437  112536 cri.go:89] found id: "0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa"
	I0420 00:49:31.527442  112536 cri.go:89] found id: "278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95"
	I0420 00:49:31.527445  112536 cri.go:89] found id: "e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175"
	I0420 00:49:31.527450  112536 cri.go:89] found id: "339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb"
	I0420 00:49:31.527454  112536 cri.go:89] found id: "81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c"
	I0420 00:49:31.527458  112536 cri.go:89] found id: "b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7"
	I0420 00:49:31.527461  112536 cri.go:89] found id: ""
	I0420 00:49:31.527512  112536 ssh_runner.go:195] Run: sudo runc list -f json
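	[editor's note] The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call above prints one container ID per line, which the log then echoes as the "found id:" entries before falling back to `runc list`. A minimal sketch of turning that quiet output into a slice of IDs, assuming the command output has already been captured as a string:

	package main

	import (
		"fmt"
		"strings"
	)

	// parseQuietIDs splits `crictl ps --quiet` output into container IDs,
	// dropping blank lines, mirroring the "found id" entries in the log.
	func parseQuietIDs(output string) []string {
		var ids []string
		for _, line := range strings.Split(output, "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids
	}

	func main() {
		sample := "f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87\n965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969\n"
		fmt.Println(parseQuietIDs(sample))
	}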
	
	
	==> CRI-O <==
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.400866991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713574402400844240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=174e8468-d25d-42d7-82c4-19b72769e38f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.401665113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cce8ca7-7266-43a0-8de3-44c03f2accef name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.401714685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cce8ca7-7266-43a0-8de3-44c03f2accef name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.402056305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c373d9ab9916e6c97f3c326b821922d2b75e734231d5e691275538ed6dd352dd,PodSandboxId:c017c6d333758c4e1b1e4effddfc6c56eae50f96da8ed0078147005856385edc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713574212462664810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35,PodSandboxId:2788e89cb28c46293860818725963f909b316cabd9cc84fcb1e8a22181947892,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713574179072929436,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696,PodSandboxId:fe978d5a63406946295962456854669ea9769a400eac874e7aaa44c8491d0804,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713574178796341941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a8a21632a3f8138cf9baf78d98e99ffb89ba79c6dcb2fb4bf331d33e55ecc5,PodSandboxId:aaa286949dad8993f3974de3fea3c741d3f17f781c0c7bb8ddf4f733f8f3fae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713574178753409689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},An
notations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af,PodSandboxId:e29602718b91850c80b2e9c44f4aa38a3e02888082494f02867b6ea7c12e88b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713574178671788316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.ku
bernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b,PodSandboxId:4e9538c73ad90c10c58f61635c331ccf183de9a61d4c1023c4f3b66f358abe49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713574173902116355,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[string
]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766,PodSandboxId:3c997b5238fd83589c33364348861f0bd2b48e90ef904807e757d8b73db91e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713574173886215827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string{io.kubernetes.container.hash: 70aa52
2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a,PodSandboxId:c348aeb6b3c82f332d3880270a5c307ee7ec368f643e09a47f5ac76c1ea47d7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713574173864354334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2,PodSandboxId:0982ad8753fce485895d750299c436fcb0cc0c564edfbf31dfea886ea7b74cdf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713574173792922346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fe5ec0ecb57f0e82408ca966347ca87580fff9c19829f6a7d70f3f080cf9f3,PodSandboxId:6da2ff71dd49dfaa998acb831908cac2140d89c2b3c838c042b1bc35aae1c2dd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713573870179457901,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87,PodSandboxId:596fbfbd40b0b4c23f81de04a4c41efb03af7eb42307c2821c732e927c5370c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713573819989310919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969,PodSandboxId:355895ccdd2c5766937723d682c03e151b277ec709ceaaef82498a6532e9423c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713573819963026060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},Annotations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa,PodSandboxId:879a6cbe15b42171d5d10281b94041febdf0c636bd4b2e49cc0bdb5ffd056c95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713573818339220687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.kubernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95,PodSandboxId:7a3bea706d980121fcec93816d4e91c120df59efe3fb669dbe21ebe22b0113bf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713573818167001593,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1
-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175,PodSandboxId:0713688d466e52a2c687fde5eab6b6728c28928f688338f1d2291bb4ac90b30b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713573797702898456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string
{io.kubernetes.container.hash: 70aa522c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c,PodSandboxId:209b4833115b9ef8224439c5a8fec12e9155896992835ec5286f6d2a8e8a15f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713573797651921097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb,PodSandboxId:a8c2ccf6a610f7f06ed346ee7c807a16889d78c2d214c682cf8b4de58eab1bb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713573797663346072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io
.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7,PodSandboxId:9fe3a6907a5ae3c1930afaba6a04f92f4977d630ef2fce072aff30eacb46eaa6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713573797591317956,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cce8ca7-7266-43a0-8de3-44c03f2accef name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.447944739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f08ea14-dd0e-4d30-89b0-4da2d9b7657a name=/runtime.v1.RuntimeService/Version
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.448020351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f08ea14-dd0e-4d30-89b0-4da2d9b7657a name=/runtime.v1.RuntimeService/Version
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.449144544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cea5ea2e-5062-4234-9526-394b6c62ef16 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.449662757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713574402449636224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cea5ea2e-5062-4234-9526-394b6c62ef16 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.450299029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e04fc605-9ad1-4720-9d93-9039f6bede28 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.450354095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e04fc605-9ad1-4720-9d93-9039f6bede28 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.450774505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c373d9ab9916e6c97f3c326b821922d2b75e734231d5e691275538ed6dd352dd,PodSandboxId:c017c6d333758c4e1b1e4effddfc6c56eae50f96da8ed0078147005856385edc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713574212462664810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35,PodSandboxId:2788e89cb28c46293860818725963f909b316cabd9cc84fcb1e8a22181947892,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713574179072929436,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696,PodSandboxId:fe978d5a63406946295962456854669ea9769a400eac874e7aaa44c8491d0804,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713574178796341941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a8a21632a3f8138cf9baf78d98e99ffb89ba79c6dcb2fb4bf331d33e55ecc5,PodSandboxId:aaa286949dad8993f3974de3fea3c741d3f17f781c0c7bb8ddf4f733f8f3fae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713574178753409689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},An
notations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af,PodSandboxId:e29602718b91850c80b2e9c44f4aa38a3e02888082494f02867b6ea7c12e88b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713574178671788316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.ku
bernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b,PodSandboxId:4e9538c73ad90c10c58f61635c331ccf183de9a61d4c1023c4f3b66f358abe49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713574173902116355,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[string
]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766,PodSandboxId:3c997b5238fd83589c33364348861f0bd2b48e90ef904807e757d8b73db91e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713574173886215827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string{io.kubernetes.container.hash: 70aa52
2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a,PodSandboxId:c348aeb6b3c82f332d3880270a5c307ee7ec368f643e09a47f5ac76c1ea47d7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713574173864354334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2,PodSandboxId:0982ad8753fce485895d750299c436fcb0cc0c564edfbf31dfea886ea7b74cdf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713574173792922346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fe5ec0ecb57f0e82408ca966347ca87580fff9c19829f6a7d70f3f080cf9f3,PodSandboxId:6da2ff71dd49dfaa998acb831908cac2140d89c2b3c838c042b1bc35aae1c2dd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713573870179457901,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87,PodSandboxId:596fbfbd40b0b4c23f81de04a4c41efb03af7eb42307c2821c732e927c5370c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713573819989310919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969,PodSandboxId:355895ccdd2c5766937723d682c03e151b277ec709ceaaef82498a6532e9423c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713573819963026060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},Annotations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa,PodSandboxId:879a6cbe15b42171d5d10281b94041febdf0c636bd4b2e49cc0bdb5ffd056c95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713573818339220687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.kubernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95,PodSandboxId:7a3bea706d980121fcec93816d4e91c120df59efe3fb669dbe21ebe22b0113bf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713573818167001593,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1
-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175,PodSandboxId:0713688d466e52a2c687fde5eab6b6728c28928f688338f1d2291bb4ac90b30b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713573797702898456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string
{io.kubernetes.container.hash: 70aa522c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c,PodSandboxId:209b4833115b9ef8224439c5a8fec12e9155896992835ec5286f6d2a8e8a15f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713573797651921097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb,PodSandboxId:a8c2ccf6a610f7f06ed346ee7c807a16889d78c2d214c682cf8b4de58eab1bb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713573797663346072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io
.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7,PodSandboxId:9fe3a6907a5ae3c1930afaba6a04f92f4977d630ef2fce072aff30eacb46eaa6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713573797591317956,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e04fc605-9ad1-4720-9d93-9039f6bede28 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.505198004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9922a1b9-e9de-4091-92ed-4a6835d23145 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.505271772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9922a1b9-e9de-4091-92ed-4a6835d23145 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.506323816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8af36ccd-834a-417b-8d83-3aae7181c96a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.506969794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713574402506943555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8af36ccd-834a-417b-8d83-3aae7181c96a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.507844222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf7a226e-0373-482c-bc4f-85a26b619100 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.507899148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf7a226e-0373-482c-bc4f-85a26b619100 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.508767633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c373d9ab9916e6c97f3c326b821922d2b75e734231d5e691275538ed6dd352dd,PodSandboxId:c017c6d333758c4e1b1e4effddfc6c56eae50f96da8ed0078147005856385edc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713574212462664810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35,PodSandboxId:2788e89cb28c46293860818725963f909b316cabd9cc84fcb1e8a22181947892,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713574179072929436,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696,PodSandboxId:fe978d5a63406946295962456854669ea9769a400eac874e7aaa44c8491d0804,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713574178796341941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a8a21632a3f8138cf9baf78d98e99ffb89ba79c6dcb2fb4bf331d33e55ecc5,PodSandboxId:aaa286949dad8993f3974de3fea3c741d3f17f781c0c7bb8ddf4f733f8f3fae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713574178753409689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},An
notations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af,PodSandboxId:e29602718b91850c80b2e9c44f4aa38a3e02888082494f02867b6ea7c12e88b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713574178671788316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.ku
bernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b,PodSandboxId:4e9538c73ad90c10c58f61635c331ccf183de9a61d4c1023c4f3b66f358abe49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713574173902116355,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[string
]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766,PodSandboxId:3c997b5238fd83589c33364348861f0bd2b48e90ef904807e757d8b73db91e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713574173886215827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string{io.kubernetes.container.hash: 70aa52
2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a,PodSandboxId:c348aeb6b3c82f332d3880270a5c307ee7ec368f643e09a47f5ac76c1ea47d7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713574173864354334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2,PodSandboxId:0982ad8753fce485895d750299c436fcb0cc0c564edfbf31dfea886ea7b74cdf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713574173792922346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fe5ec0ecb57f0e82408ca966347ca87580fff9c19829f6a7d70f3f080cf9f3,PodSandboxId:6da2ff71dd49dfaa998acb831908cac2140d89c2b3c838c042b1bc35aae1c2dd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713573870179457901,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87,PodSandboxId:596fbfbd40b0b4c23f81de04a4c41efb03af7eb42307c2821c732e927c5370c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713573819989310919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969,PodSandboxId:355895ccdd2c5766937723d682c03e151b277ec709ceaaef82498a6532e9423c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713573819963026060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},Annotations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa,PodSandboxId:879a6cbe15b42171d5d10281b94041febdf0c636bd4b2e49cc0bdb5ffd056c95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713573818339220687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.kubernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95,PodSandboxId:7a3bea706d980121fcec93816d4e91c120df59efe3fb669dbe21ebe22b0113bf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713573818167001593,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1
-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175,PodSandboxId:0713688d466e52a2c687fde5eab6b6728c28928f688338f1d2291bb4ac90b30b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713573797702898456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string
{io.kubernetes.container.hash: 70aa522c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c,PodSandboxId:209b4833115b9ef8224439c5a8fec12e9155896992835ec5286f6d2a8e8a15f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713573797651921097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb,PodSandboxId:a8c2ccf6a610f7f06ed346ee7c807a16889d78c2d214c682cf8b4de58eab1bb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713573797663346072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io
.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7,PodSandboxId:9fe3a6907a5ae3c1930afaba6a04f92f4977d630ef2fce072aff30eacb46eaa6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713573797591317956,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf7a226e-0373-482c-bc4f-85a26b619100 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.563318699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffef473a-81a7-4f42-a5a9-dd14a8c96b64 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.566805996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffef473a-81a7-4f42-a5a9-dd14a8c96b64 name=/runtime.v1.RuntimeService/Version
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.568226225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4b8b2ba-07e3-4014-9d08-0a93cf9755a6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.568698024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713574402568672501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4b8b2ba-07e3-4014-9d08-0a93cf9755a6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.569703054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8f06898-0129-407f-9d7a-72c5072287c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.569754076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8f06898-0129-407f-9d7a-72c5072287c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 00:53:22 multinode-059001 crio[2847]: time="2024-04-20 00:53:22.570089176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c373d9ab9916e6c97f3c326b821922d2b75e734231d5e691275538ed6dd352dd,PodSandboxId:c017c6d333758c4e1b1e4effddfc6c56eae50f96da8ed0078147005856385edc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713574212462664810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35,PodSandboxId:2788e89cb28c46293860818725963f909b316cabd9cc84fcb1e8a22181947892,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713574179072929436,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696,PodSandboxId:fe978d5a63406946295962456854669ea9769a400eac874e7aaa44c8491d0804,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713574178796341941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a8a21632a3f8138cf9baf78d98e99ffb89ba79c6dcb2fb4bf331d33e55ecc5,PodSandboxId:aaa286949dad8993f3974de3fea3c741d3f17f781c0c7bb8ddf4f733f8f3fae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713574178753409689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},An
notations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af,PodSandboxId:e29602718b91850c80b2e9c44f4aa38a3e02888082494f02867b6ea7c12e88b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713574178671788316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.ku
bernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b,PodSandboxId:4e9538c73ad90c10c58f61635c331ccf183de9a61d4c1023c4f3b66f358abe49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713574173902116355,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[string
]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766,PodSandboxId:3c997b5238fd83589c33364348861f0bd2b48e90ef904807e757d8b73db91e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713574173886215827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string{io.kubernetes.container.hash: 70aa52
2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a,PodSandboxId:c348aeb6b3c82f332d3880270a5c307ee7ec368f643e09a47f5ac76c1ea47d7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713574173864354334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2,PodSandboxId:0982ad8753fce485895d750299c436fcb0cc0c564edfbf31dfea886ea7b74cdf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713574173792922346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fe5ec0ecb57f0e82408ca966347ca87580fff9c19829f6a7d70f3f080cf9f3,PodSandboxId:6da2ff71dd49dfaa998acb831908cac2140d89c2b3c838c042b1bc35aae1c2dd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713573870179457901,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xlthm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cecb2998-715e-4d88-bea0-1cbece396619,},Annotations:map[string]string{io.kubernetes.container.hash: 96639d9f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87,PodSandboxId:596fbfbd40b0b4c23f81de04a4c41efb03af7eb42307c2821c732e927c5370c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713573819989310919,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78rrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0e9030-0d37-40b8-bb06-621b526ca289,},Annotations:map[string]string{io.kubernetes.container.hash: b3e0e2af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:965cc419b10f36404b1558be87e8f817b003ad469e3ce84651b28b55bc60b969,PodSandboxId:355895ccdd2c5766937723d682c03e151b277ec709ceaaef82498a6532e9423c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713573819963026060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 139b40e9-a1ec-4035-88a9-e382b2ee6293,},Annotations:map[string]string{io.kubernetes.container.hash: d83b8429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa,PodSandboxId:879a6cbe15b42171d5d10281b94041febdf0c636bd4b2e49cc0bdb5ffd056c95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713573818339220687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blctg,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 64ab7435-a9ee-432d-8b87-58c3e4c7a147,},Annotations:map[string]string{io.kubernetes.container.hash: e4e3a721,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95,PodSandboxId:7a3bea706d980121fcec93816d4e91c120df59efe3fb669dbe21ebe22b0113bf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713573818167001593,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nrhgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc879522-987c-4e38-bdb1
-949a9d934334,},Annotations:map[string]string{io.kubernetes.container.hash: d795ed4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175,PodSandboxId:0713688d466e52a2c687fde5eab6b6728c28928f688338f1d2291bb4ac90b30b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713573797702898456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccbd724481c76f6013288282b8986ae2,},Annotations:map[string]string
{io.kubernetes.container.hash: 70aa522c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c,PodSandboxId:209b4833115b9ef8224439c5a8fec12e9155896992835ec5286f6d2a8e8a15f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713573797651921097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2ebac18edd173db2b08c1b57ae6104,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb,PodSandboxId:a8c2ccf6a610f7f06ed346ee7c807a16889d78c2d214c682cf8b4de58eab1bb9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713573797663346072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b672f9613f8c5eb6391347653d552133,},Annotations:map[string]string{io
.kubernetes.container.hash: 84e0b7c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7,PodSandboxId:9fe3a6907a5ae3c1930afaba6a04f92f4977d630ef2fce072aff30eacb46eaa6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713573797591317956,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-059001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c048501d08bb4d4c228b17991d7668e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8f06898-0129-407f-9d7a-72c5072287c9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c373d9ab9916e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   c017c6d333758       busybox-fc5497c4f-xlthm
	b75d3e908c6a1       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   2788e89cb28c4       kindnet-nrhgt
	67e5bf4a09cd8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   fe978d5a63406       coredns-7db6d8ff4d-78rrw
	e0a8a21632a3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   aaa286949dad8       storage-provisioner
	1de6f3bfc27ff       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   e29602718b918       kube-proxy-blctg
	06122a5f65bc2       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   4e9538c73ad90       kube-controller-manager-multinode-059001
	2ec26c00543f4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   3c997b5238fd8       etcd-multinode-059001
	e2f78f6b91aa7       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   c348aeb6b3c82       kube-scheduler-multinode-059001
	cc8dc5eb92c25       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   0982ad8753fce       kube-apiserver-multinode-059001
	53fe5ec0ecb57       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   6da2ff71dd49d       busybox-fc5497c4f-xlthm
	f3eb2c3e1d64f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   596fbfbd40b0b       coredns-7db6d8ff4d-78rrw
	965cc419b10f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   355895ccdd2c5       storage-provisioner
	0cdf1f27fc4a7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      9 minutes ago       Exited              kube-proxy                0                   879a6cbe15b42       kube-proxy-blctg
	278b79bc7d7b5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   7a3bea706d980       kindnet-nrhgt
	e6b41406ce7bb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   0713688d466e5       etcd-multinode-059001
	339a729cde4f1       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   a8c2ccf6a610f       kube-apiserver-multinode-059001
	81d365f1385c8       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   209b4833115b9       kube-controller-manager-multinode-059001
	b8e65c0c15cef       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   9fe3a6907a5ae       kube-scheduler-multinode-059001
	
	
	==> coredns [67e5bf4a09cd8321581c4874326d57b43cb6b128503e0a7689c8f3f439547696] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59173 - 3994 "HINFO IN 3491305454883615209.2019287686508915915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017089179s
	
	
	==> coredns [f3eb2c3e1d64f749c4a33a890bf03a2469f9d04d15bf36b4abfcedbf37c11d87] <==
	[INFO] 10.244.0.3:56450 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001721997s
	[INFO] 10.244.0.3:55783 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081097s
	[INFO] 10.244.0.3:52177 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127753s
	[INFO] 10.244.0.3:54018 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001195973s
	[INFO] 10.244.0.3:32887 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084527s
	[INFO] 10.244.0.3:51028 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120484s
	[INFO] 10.244.0.3:58042 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069878s
	[INFO] 10.244.1.2:37561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146597s
	[INFO] 10.244.1.2:39648 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102418s
	[INFO] 10.244.1.2:48176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138976s
	[INFO] 10.244.1.2:35182 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098899s
	[INFO] 10.244.0.3:44206 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167919s
	[INFO] 10.244.0.3:50161 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112368s
	[INFO] 10.244.0.3:38784 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201148s
	[INFO] 10.244.0.3:54035 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088839s
	[INFO] 10.244.1.2:60259 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194856s
	[INFO] 10.244.1.2:49426 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154222s
	[INFO] 10.244.1.2:43083 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132217s
	[INFO] 10.244.1.2:58393 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011622s
	[INFO] 10.244.0.3:49089 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148632s
	[INFO] 10.244.0.3:38258 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138106s
	[INFO] 10.244.0.3:47064 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079089s
	[INFO] 10.244.0.3:43782 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099575s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-059001
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-059001
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-059001
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T00_43_24_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:43:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-059001
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:53:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:49:37 +0000   Sat, 20 Apr 2024 00:43:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:49:37 +0000   Sat, 20 Apr 2024 00:43:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:49:37 +0000   Sat, 20 Apr 2024 00:43:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:49:37 +0000   Sat, 20 Apr 2024 00:43:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    multinode-059001
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 151894ac4b6d4e5b9d2a7c732c17d3b5
	  System UUID:                151894ac-4b6d-4e5b-9d2a-7c732c17d3b5
	  Boot ID:                    2762f1af-74dd-4fd3-a09b-5b144bfaea57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xlthm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 coredns-7db6d8ff4d-78rrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m46s
	  kube-system                 etcd-multinode-059001                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m59s
	  kube-system                 kindnet-nrhgt                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m46s
	  kube-system                 kube-apiserver-multinode-059001             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-controller-manager-multinode-059001    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-proxy-blctg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-scheduler-multinode-059001             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m44s                  kube-proxy       
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-059001 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m59s                  kubelet          Node multinode-059001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m59s                  kubelet          Node multinode-059001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s                  kubelet          Node multinode-059001 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m59s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m47s                  node-controller  Node multinode-059001 event: Registered Node multinode-059001 in Controller
	  Normal  NodeReady                9m43s                  kubelet          Node multinode-059001 status is now: NodeReady
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node multinode-059001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node multinode-059001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node multinode-059001 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-059001 event: Registered Node multinode-059001 in Controller
	
	
	Name:               multinode-059001-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-059001-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-059001
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T00_50_20_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:50:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-059001-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:51:00 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 20 Apr 2024 00:50:50 +0000   Sat, 20 Apr 2024 00:51:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 20 Apr 2024 00:50:50 +0000   Sat, 20 Apr 2024 00:51:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 20 Apr 2024 00:50:50 +0000   Sat, 20 Apr 2024 00:51:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 20 Apr 2024 00:50:50 +0000   Sat, 20 Apr 2024 00:51:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    multinode-059001-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7de15afd65a4b16958d3dab80a21b11
	  System UUID:                d7de15af-d65a-4b16-958d-3dab80a21b11
	  Boot ID:                    3953b031-9153-439e-9151-da703721965e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-srrgg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kindnet-zfrjl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m4s
	  kube-system                 kube-proxy-z5zrr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m58s                kube-proxy       
	  Normal  Starting                 8m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9m4s (x2 over 9m4s)  kubelet          Node multinode-059001-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m4s (x2 over 9m4s)  kubelet          Node multinode-059001-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m4s (x2 over 9m4s)  kubelet          Node multinode-059001-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m56s                kubelet          Node multinode-059001-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node multinode-059001-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node multinode-059001-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node multinode-059001-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m56s                kubelet          Node multinode-059001-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                 node-controller  Node multinode-059001-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.056282] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071956] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.205630] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.138173] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.306730] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.846277] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.059389] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.319814] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +1.066198] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.491580] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.091304] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.097624] systemd-fstab-generator[1481]: Ignoring "noauto" option for root device
	[  +0.085979] kauditd_printk_skb: 21 callbacks suppressed
	[Apr20 00:44] kauditd_printk_skb: 84 callbacks suppressed
	[Apr20 00:49] systemd-fstab-generator[2766]: Ignoring "noauto" option for root device
	[  +0.150393] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.214637] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.152379] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.288296] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +0.773144] systemd-fstab-generator[2928]: Ignoring "noauto" option for root device
	[  +1.953984] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +5.708892] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.735922] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.395805] systemd-fstab-generator[3875]: Ignoring "noauto" option for root device
	[Apr20 00:50] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2ec26c00543f48944316f9c22d404b319e5291332349b3fde5f30beaf6a17766] <==
	{"level":"info","ts":"2024-04-20T00:49:34.473438Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T00:49:34.473452Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T00:49:34.473786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 switched to configuration voters=(1146381907749364645)"}
	{"level":"info","ts":"2024-04-20T00:49:34.473867Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","added-peer-id":"fe8c4457455e3a5","added-peer-peer-urls":["https://192.168.39.200:2380"]}
	{"level":"info","ts":"2024-04-20T00:49:34.474016Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:49:34.474069Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:49:34.512175Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T00:49:34.517986Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-04-20T00:49:34.518165Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-04-20T00:49:34.519835Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T00:49:34.519765Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe8c4457455e3a5","initial-advertise-peer-urls":["https://192.168.39.200:2380"],"listen-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T00:49:35.713607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-20T00:49:35.713708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-20T00:49:35.713783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgPreVoteResp from fe8c4457455e3a5 at term 2"}
	{"level":"info","ts":"2024-04-20T00:49:35.713819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became candidate at term 3"}
	{"level":"info","ts":"2024-04-20T00:49:35.713844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgVoteResp from fe8c4457455e3a5 at term 3"}
	{"level":"info","ts":"2024-04-20T00:49:35.713872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became leader at term 3"}
	{"level":"info","ts":"2024-04-20T00:49:35.713897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe8c4457455e3a5 elected leader fe8c4457455e3a5 at term 3"}
	{"level":"info","ts":"2024-04-20T00:49:35.723194Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T00:49:35.725579Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T00:49:35.72564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T00:49:35.72748Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	{"level":"info","ts":"2024-04-20T00:49:35.727949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T00:49:35.730066Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T00:49:35.72302Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe8c4457455e3a5","local-member-attributes":"{Name:multinode-059001 ClientURLs:[https://192.168.39.200:2379]}","request-path":"/0/members/fe8c4457455e3a5/attributes","cluster-id":"1d37198946ef4128","publish-timeout":"7s"}
	
	
	==> etcd [e6b41406ce7bb57c290c09411bc7850ed947848da5b369d197c7de10f99cc175] <==
	{"level":"info","ts":"2024-04-20T00:43:19.037072Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:43:19.037093Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:43:19.037696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T00:43:19.038672Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	{"level":"info","ts":"2024-04-20T00:44:18.303737Z","caller":"traceutil/trace.go:171","msg":"trace[2075837120] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"156.713872ms","start":"2024-04-20T00:44:18.146991Z","end":"2024-04-20T00:44:18.303705Z","steps":["trace[2075837120] 'process raft request'  (duration: 114.461721ms)","trace[2075837120] 'compare'  (duration: 41.874534ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T00:44:18.303937Z","caller":"traceutil/trace.go:171","msg":"trace[494060433] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"121.115523ms","start":"2024-04-20T00:44:18.182814Z","end":"2024-04-20T00:44:18.30393Z","steps":["trace[494060433] 'process raft request'  (duration: 120.611625ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:45:04.089959Z","caller":"traceutil/trace.go:171","msg":"trace[1502393717] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"106.066007ms","start":"2024-04-20T00:45:03.983863Z","end":"2024-04-20T00:45:04.089929Z","steps":["trace[1502393717] 'process raft request'  (duration: 105.922806ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:45:04.227931Z","caller":"traceutil/trace.go:171","msg":"trace[2023883580] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:614; }","duration":"126.819005ms","start":"2024-04-20T00:45:04.101098Z","end":"2024-04-20T00:45:04.227917Z","steps":["trace[2023883580] 'read index received'  (duration: 25.053873ms)","trace[2023883580] 'applied index is now lower than readState.Index'  (duration: 101.764653ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T00:45:04.228054Z","caller":"traceutil/trace.go:171","msg":"trace[582215466] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"130.194148ms","start":"2024-04-20T00:45:04.09785Z","end":"2024-04-20T00:45:04.228045Z","steps":["trace[582215466] 'process raft request'  (duration: 125.225363ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:45:04.228328Z","caller":"traceutil/trace.go:171","msg":"trace[759209653] transaction","detail":"{read_only:false; number_of_response:1; response_revision:585; }","duration":"127.165677ms","start":"2024-04-20T00:45:04.101151Z","end":"2024-04-20T00:45:04.228317Z","steps":["trace[759209653] 'process raft request'  (duration: 126.722624ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:45:04.228353Z","caller":"traceutil/trace.go:171","msg":"trace[1411647333] transaction","detail":"{read_only:false; number_of_response:1; response_revision:585; }","duration":"126.022079ms","start":"2024-04-20T00:45:04.102327Z","end":"2024-04-20T00:45:04.22835Z","steps":["trace[1411647333] 'process raft request'  (duration: 125.571033ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:45:04.228497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.290855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-20T00:45:04.23043Z","caller":"traceutil/trace.go:171","msg":"trace[989438498] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:585; }","duration":"129.34178ms","start":"2024-04-20T00:45:04.101071Z","end":"2024-04-20T00:45:04.230412Z","steps":["trace[989438498] 'agreement among raft nodes before linearized reading'  (duration: 127.293907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:45:04.228729Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.918771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-059001-m03\" ","response":"range_response_count:1 size:2130"}
	{"level":"info","ts":"2024-04-20T00:45:04.2309Z","caller":"traceutil/trace.go:171","msg":"trace[319444847] range","detail":"{range_begin:/registry/minions/multinode-059001-m03; range_end:; response_count:1; response_revision:585; }","duration":"127.114566ms","start":"2024-04-20T00:45:04.103778Z","end":"2024-04-20T00:45:04.230893Z","steps":["trace[319444847] 'agreement among raft nodes before linearized reading'  (duration: 124.923285ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:57.927814Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-20T00:47:57.928064Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-059001","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"]}
	{"level":"warn","ts":"2024-04-20T00:47:57.928289Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-20T00:47:57.928402Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-20T00:47:58.021163Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.200:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-20T00:47:58.021222Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.200:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-20T00:47:58.022787Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fe8c4457455e3a5","current-leader-member-id":"fe8c4457455e3a5"}
	{"level":"info","ts":"2024-04-20T00:47:58.025275Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-04-20T00:47:58.025457Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-04-20T00:47:58.025493Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-059001","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"]}
	
	
	==> kernel <==
	 00:53:23 up 10 min,  0 users,  load average: 0.63, 0.50, 0.29
	Linux multinode-059001 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [278b79bc7d7b5493e7659e02415efc2edf732ce4e95961ab096cf68068cb2c95] <==
	I0420 00:47:09.267090       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:47:19.280953       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:47:19.281097       1 main.go:227] handling current node
	I0420 00:47:19.281120       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:47:19.281138       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:47:19.281270       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:47:19.281294       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:47:29.286576       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:47:29.286687       1 main.go:227] handling current node
	I0420 00:47:29.286716       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:47:29.286735       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:47:29.286896       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:47:29.286921       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:47:39.293354       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:47:39.293593       1 main.go:227] handling current node
	I0420 00:47:39.293681       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:47:39.293708       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:47:39.293853       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:47:39.293873       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	I0420 00:47:49.300018       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:47:49.300070       1 main.go:227] handling current node
	I0420 00:47:49.300080       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:47:49.300096       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:47:49.300686       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0420 00:47:49.300772       1 main.go:250] Node multinode-059001-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b75d3e908c6a1d767bee0f970657c6dd2ec7c785094ee7e8174e8b6bead9eb35] <==
	I0420 00:52:20.186659       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:52:30.194353       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:52:30.194427       1 main.go:227] handling current node
	I0420 00:52:30.194450       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:52:30.194457       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:52:40.237877       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:52:40.237959       1 main.go:227] handling current node
	I0420 00:52:40.237981       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:52:40.237998       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:52:50.334051       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:52:50.334138       1 main.go:227] handling current node
	I0420 00:52:50.334149       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:52:50.334156       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:53:00.346046       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:53:00.346095       1 main.go:227] handling current node
	I0420 00:53:00.346109       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:53:00.346115       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:53:10.358323       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:53:10.358382       1 main.go:227] handling current node
	I0420 00:53:10.358392       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:53:10.358399       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	I0420 00:53:20.363076       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0420 00:53:20.363220       1 main.go:227] handling current node
	I0420 00:53:20.363242       1 main.go:223] Handling node with IPs: map[192.168.39.91:{}]
	I0420 00:53:20.363260       1 main.go:250] Node multinode-059001-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [339a729cde4f15511548279f70978ed3269d7198f64ba32a003790f3bb2bd1eb] <==
	W0420 00:47:57.955107       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955169       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955219       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955271       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955321       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955368       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955411       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955468       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955517       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955697       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955758       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955805       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955859       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955922       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.955974       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956018       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956145       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956197       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956242       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956293       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956336       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956380       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956427       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.956471       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 00:47:57.957744       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [cc8dc5eb92c25a81376e5bc22d48ea950cfab5d2a9e85631f1f9dce9014b8ec2] <==
	I0420 00:49:37.239501       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 00:49:37.247700       1 aggregator.go:165] initial CRD sync complete...
	I0420 00:49:37.247738       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 00:49:37.247745       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 00:49:37.247750       1 cache.go:39] Caches are synced for autoregister controller
	I0420 00:49:37.256964       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 00:49:37.265784       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 00:49:37.265821       1 policy_source.go:224] refreshing policies
	I0420 00:49:37.334036       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 00:49:37.334177       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 00:49:37.334214       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 00:49:37.334412       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 00:49:37.342101       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 00:49:37.352712       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 00:49:37.352789       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 00:49:37.363627       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0420 00:49:37.378284       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0420 00:49:38.159063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0420 00:49:39.676206       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 00:49:39.845794       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0420 00:49:39.873881       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 00:49:39.960050       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0420 00:49:39.971147       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0420 00:49:50.278264       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0420 00:49:50.328180       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [06122a5f65bc233da5812210b7739fd4d498d9c11f7b786cff2e2574315b535b] <==
	I0420 00:50:19.552015       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m02" podCIDRs=["10.244.1.0/24"]
	I0420 00:50:20.984495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.999µs"
	I0420 00:50:21.455105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.323µs"
	I0420 00:50:21.484101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.536µs"
	I0420 00:50:21.492916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.552µs"
	I0420 00:50:21.502048       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.905µs"
	I0420 00:50:21.511453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.48µs"
	I0420 00:50:21.516994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.517µs"
	I0420 00:50:26.990003       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:50:27.008756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.428µs"
	I0420 00:50:27.024679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.737µs"
	I0420 00:50:28.922914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.386877ms"
	I0420 00:50:28.923023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.556µs"
	I0420 00:50:46.607357       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:50:47.729476       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:50:47.730233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-059001-m03\" does not exist"
	I0420 00:50:47.749261       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m03" podCIDRs=["10.244.2.0/24"]
	I0420 00:50:55.282187       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:51:01.084595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:51:40.153687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.969413ms"
	I0420 00:51:40.153783       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.035µs"
	I0420 00:51:50.034722       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vx26z"
	I0420 00:51:50.070497       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vx26z"
	I0420 00:51:50.070620       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mwlh8"
	I0420 00:51:50.100150       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mwlh8"
	
	
	==> kube-controller-manager [81d365f1385c877c7c0e983fc2fcdafa619322c001fce172d3d29450e5d3d53c] <==
	I0420 00:44:18.312629       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-059001-m02\" does not exist"
	I0420 00:44:18.367629       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m02" podCIDRs=["10.244.1.0/24"]
	I0420 00:44:20.587140       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-059001-m02"
	I0420 00:44:26.402499       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:44:28.692891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.409928ms"
	I0420 00:44:28.733325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.255008ms"
	I0420 00:44:28.752956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.245075ms"
	I0420 00:44:28.753058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.3µs"
	I0420 00:44:30.641246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.023818ms"
	I0420 00:44:30.642319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.451µs"
	I0420 00:44:31.200469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.951875ms"
	I0420 00:44:31.201327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.449µs"
	I0420 00:45:04.097960       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-059001-m03\" does not exist"
	I0420 00:45:04.098062       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:45:04.256007       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m03" podCIDRs=["10.244.2.0/24"]
	I0420 00:45:05.606169       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-059001-m03"
	I0420 00:45:12.862204       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:45:44.174863       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:45:45.266896       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:45:45.267238       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-059001-m03\" does not exist"
	I0420 00:45:45.290204       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-059001-m03" podCIDRs=["10.244.3.0/24"]
	I0420 00:45:51.455304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m02"
	I0420 00:46:30.669911       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-059001-m03"
	I0420 00:46:30.718977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.298661ms"
	I0420 00:46:30.719643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="135.718µs"
	
	
	==> kube-proxy [0cdf1f27fc4a775bdb1bd07aca352e8375ad5df4a0ac4f9844f6731ab60ba0fa] <==
	I0420 00:43:38.521708       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:43:38.529870       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0420 00:43:38.585053       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:43:38.585089       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:43:38.585104       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:43:38.588245       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:43:38.588489       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:43:38.588635       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:43:38.591196       1 config.go:192] "Starting service config controller"
	I0420 00:43:38.591241       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:43:38.591282       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:43:38.591298       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:43:38.592148       1 config.go:319] "Starting node config controller"
	I0420 00:43:38.592196       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:43:38.691759       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:43:38.691825       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:43:38.692309       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [1de6f3bfc27ff52e665aeea65f29380aacd63f3616dad1947aba138059bf66af] <==
	I0420 00:49:39.221428       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:49:39.250353       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0420 00:49:39.425721       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:49:39.425749       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:49:39.425764       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:49:39.437086       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:49:39.437253       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:49:39.437268       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:49:39.441018       1 config.go:192] "Starting service config controller"
	I0420 00:49:39.441035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:49:39.441151       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:49:39.441157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:49:39.452491       1 config.go:319] "Starting node config controller"
	I0420 00:49:39.453252       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:49:39.543860       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:49:39.543903       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:49:39.554081       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b8e65c0c15cef8d42afec5611dd88b24133e9f162cd54535518c9f25729dcfc7] <==
	E0420 00:43:20.404346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 00:43:20.403314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:43:20.404941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:43:20.403603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:43:20.405070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:43:20.404050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 00:43:20.405215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 00:43:20.404101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:43:20.405340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:43:21.231996       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 00:43:21.232105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 00:43:21.410618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 00:43:21.411653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 00:43:21.513679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:43:21.513737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 00:43:21.521173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:43:21.521225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:43:21.555077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:43:21.555139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:43:21.619679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 00:43:21.619733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 00:43:21.665241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:43:21.665293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0420 00:43:21.992874       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0420 00:47:57.936818       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e2f78f6b91aa74df6a71fda29b2aa790b8049ec4615015da4cbff4961fca992a] <==
	I0420 00:49:34.962140       1 serving.go:380] Generated self-signed cert in-memory
	W0420 00:49:37.239991       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0420 00:49:37.243632       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:49:37.243782       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0420 00:49:37.243812       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0420 00:49:37.263889       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0420 00:49:37.264760       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:49:37.267032       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0420 00:49:37.267477       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0420 00:49:37.267940       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 00:49:37.267784       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0420 00:49:37.368245       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.074757    3061 topology_manager.go:215] "Topology Admit Handler" podUID="139b40e9-a1ec-4035-88a9-e382b2ee6293" podNamespace="kube-system" podName="storage-provisioner"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.074833    3061 topology_manager.go:215] "Topology Admit Handler" podUID="cecb2998-715e-4d88-bea0-1cbece396619" podNamespace="default" podName="busybox-fc5497c4f-xlthm"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.085988    3061 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.129985    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc879522-987c-4e38-bdb1-949a9d934334-lib-modules\") pod \"kindnet-nrhgt\" (UID: \"dc879522-987c-4e38-bdb1-949a9d934334\") " pod="kube-system/kindnet-nrhgt"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130061    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dc879522-987c-4e38-bdb1-949a9d934334-cni-cfg\") pod \"kindnet-nrhgt\" (UID: \"dc879522-987c-4e38-bdb1-949a9d934334\") " pod="kube-system/kindnet-nrhgt"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130081    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc879522-987c-4e38-bdb1-949a9d934334-xtables-lock\") pod \"kindnet-nrhgt\" (UID: \"dc879522-987c-4e38-bdb1-949a9d934334\") " pod="kube-system/kindnet-nrhgt"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130135    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64ab7435-a9ee-432d-8b87-58c3e4c7a147-xtables-lock\") pod \"kube-proxy-blctg\" (UID: \"64ab7435-a9ee-432d-8b87-58c3e4c7a147\") " pod="kube-system/kube-proxy-blctg"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130148    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64ab7435-a9ee-432d-8b87-58c3e4c7a147-lib-modules\") pod \"kube-proxy-blctg\" (UID: \"64ab7435-a9ee-432d-8b87-58c3e4c7a147\") " pod="kube-system/kube-proxy-blctg"
	Apr 20 00:49:38 multinode-059001 kubelet[3061]: I0420 00:49:38.130175    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/139b40e9-a1ec-4035-88a9-e382b2ee6293-tmp\") pod \"storage-provisioner\" (UID: \"139b40e9-a1ec-4035-88a9-e382b2ee6293\") " pod="kube-system/storage-provisioner"
	Apr 20 00:49:45 multinode-059001 kubelet[3061]: I0420 00:49:45.208322    3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 20 00:50:33 multinode-059001 kubelet[3061]: E0420 00:50:33.210126    3061 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:50:33 multinode-059001 kubelet[3061]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:50:33 multinode-059001 kubelet[3061]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:50:33 multinode-059001 kubelet[3061]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:50:33 multinode-059001 kubelet[3061]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:51:33 multinode-059001 kubelet[3061]: E0420 00:51:33.205577    3061 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:51:33 multinode-059001 kubelet[3061]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:51:33 multinode-059001 kubelet[3061]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:51:33 multinode-059001 kubelet[3061]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:51:33 multinode-059001 kubelet[3061]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:52:33 multinode-059001 kubelet[3061]: E0420 00:52:33.213242    3061 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:52:33 multinode-059001 kubelet[3061]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:52:33 multinode-059001 kubelet[3061]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:52:33 multinode-059001 kubelet[3061]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:52:33 multinode-059001 kubelet[3061]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 00:53:22.108769  114384 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18703-76456/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-059001 -n multinode-059001
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-059001 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.65s)
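
Note on the stderr above: "bufio.Scanner: token too long" is a Go standard-library error (bufio.ErrTooLong). A bufio.Scanner refuses tokens larger than its buffer limit, which defaults to bufio.MaxScanTokenSize (64 KiB), so logs.go could not re-read lastStart.txt because that file contains at least one longer line. A minimal Go sketch, not the minikube code itself, showing how a larger buffer avoids the error (the file name is taken from the error message):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // path taken from the error above
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); a longer line makes
		// Scan stop and Err return bufio.ErrTooLong ("token too long").
		// Raising the maximum lets very long log lines through.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}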

                                                
                                    
x
+
TestPreload (277.34s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-064291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0420 00:58:11.657924   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-064291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m16.323699754s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-064291 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-064291 image pull gcr.io/k8s-minikube/busybox: (1.148896432s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-064291
E0420 01:00:10.863742   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 01:00:27.814924   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-064291: exit status 82 (2m0.475677073s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-064291"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-064291 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-20 01:01:25.522725056 +0000 UTC m=+3859.037592447
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-064291 -n test-preload-064291
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-064291 -n test-preload-064291: exit status 3 (18.492070647s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:01:44.009695  117306 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	E0420 01:01:44.009720  117306 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-064291" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-064291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-064291
--- FAIL: TestPreload (277.34s)
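
The stop failure above is a timeout rather than a hard error: minikube reports GUEST_STOP_TIMEOUT (exit status 82) because the "test-preload-064291" VM still reported state "Running" when the stop command gave up. A minimal Go sketch of a retry wrapper around the same command, assuming the binary and profile names shown in the log (the attempt count and delay are arbitrary choices, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// stopWithRetry re-runs "minikube stop -p <profile>" a few times,
	// since a slow guest shutdown can outlast a single stop attempt.
	func stopWithRetry(binary, profile string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command(binary, "stop", "-p", profile)
			if err = cmd.Run(); err == nil {
				return nil
			}
			fmt.Printf("stop attempt %d failed: %v; retrying\n", i+1, err)
			time.Sleep(30 * time.Second)
		}
		return fmt.Errorf("profile %q did not stop after %d attempts: %w", profile, attempts, err)
	}

	func main() {
		if err := stopWithRetry("out/minikube-linux-amd64", "test-preload-064291", 3); err != nil {
			fmt.Println(err)
		}
	}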

                                                
                                    
x
+
TestKubernetesUpgrade (443.94s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m59.074193405s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-345460] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-345460" primary control-plane node in "kubernetes-upgrade-345460" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:06:44.216607  123271 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:06:44.216705  123271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:06:44.216716  123271 out.go:304] Setting ErrFile to fd 2...
	I0420 01:06:44.216724  123271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:06:44.216955  123271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:06:44.217614  123271 out.go:298] Setting JSON to false
	I0420 01:06:44.218522  123271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13751,"bootTime":1713561453,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:06:44.218577  123271 start.go:139] virtualization: kvm guest
	I0420 01:06:44.220724  123271 out.go:177] * [kubernetes-upgrade-345460] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:06:44.222117  123271 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:06:44.222119  123271 notify.go:220] Checking for updates...
	I0420 01:06:44.223494  123271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:06:44.225191  123271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:06:44.226472  123271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:06:44.227620  123271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:06:44.228708  123271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:06:44.230317  123271 config.go:182] Loaded profile config "NoKubernetes-254901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0420 01:06:44.230405  123271 config.go:182] Loaded profile config "cert-expiration-692221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:06:44.230486  123271 config.go:182] Loaded profile config "running-upgrade-981367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0420 01:06:44.230567  123271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:06:44.267940  123271 out.go:177] * Using the kvm2 driver based on user configuration
	I0420 01:06:44.269095  123271 start.go:297] selected driver: kvm2
	I0420 01:06:44.269106  123271 start.go:901] validating driver "kvm2" against <nil>
	I0420 01:06:44.269117  123271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:06:44.269813  123271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:06:44.269877  123271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:06:44.284546  123271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:06:44.284593  123271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0420 01:06:44.284776  123271 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0420 01:06:44.284832  123271 cni.go:84] Creating CNI manager for ""
	I0420 01:06:44.284845  123271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:06:44.284852  123271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0420 01:06:44.284899  123271 start.go:340] cluster config:
	{Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:06:44.284983  123271 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:06:44.286636  123271 out.go:177] * Starting "kubernetes-upgrade-345460" primary control-plane node in "kubernetes-upgrade-345460" cluster
	I0420 01:06:44.287852  123271 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:06:44.287882  123271 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:06:44.287895  123271 cache.go:56] Caching tarball of preloaded images
	I0420 01:06:44.287985  123271 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:06:44.288000  123271 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 01:06:44.288113  123271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/config.json ...
	I0420 01:06:44.288135  123271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/config.json: {Name:mkab9178d6738e60361a365de7b35e067d6c4524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:06:44.288261  123271 start.go:360] acquireMachinesLock for kubernetes-upgrade-345460: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:07:10.998222  123271 start.go:364] duration metric: took 26.709929095s to acquireMachinesLock for "kubernetes-upgrade-345460"
	I0420 01:07:10.998304  123271 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:07:10.998444  123271 start.go:125] createHost starting for "" (driver="kvm2")
	I0420 01:07:11.001406  123271 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0420 01:07:11.001640  123271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:07:11.001697  123271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:07:11.018715  123271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I0420 01:07:11.019227  123271 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:07:11.019847  123271 main.go:141] libmachine: Using API Version  1
	I0420 01:07:11.019873  123271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:07:11.020611  123271 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:07:11.022276  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetMachineName
	I0420 01:07:11.022472  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:07:11.022708  123271 start.go:159] libmachine.API.Create for "kubernetes-upgrade-345460" (driver="kvm2")
	I0420 01:07:11.022752  123271 client.go:168] LocalClient.Create starting
	I0420 01:07:11.022790  123271 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 01:07:11.022830  123271 main.go:141] libmachine: Decoding PEM data...
	I0420 01:07:11.022852  123271 main.go:141] libmachine: Parsing certificate...
	I0420 01:07:11.022939  123271 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 01:07:11.022966  123271 main.go:141] libmachine: Decoding PEM data...
	I0420 01:07:11.022982  123271 main.go:141] libmachine: Parsing certificate...
	I0420 01:07:11.023009  123271 main.go:141] libmachine: Running pre-create checks...
	I0420 01:07:11.023023  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .PreCreateCheck
	I0420 01:07:11.023367  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetConfigRaw
	I0420 01:07:11.023783  123271 main.go:141] libmachine: Creating machine...
	I0420 01:07:11.023815  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .Create
	I0420 01:07:11.023962  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Creating KVM machine...
	I0420 01:07:11.025191  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found existing default KVM network
	I0420 01:07:11.026505  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:11.026345  123775 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:f0:d0} reservation:<nil>}
	I0420 01:07:11.027633  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:11.027541  123775 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fd10}
	I0420 01:07:11.027653  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | created network xml: 
	I0420 01:07:11.027665  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | <network>
	I0420 01:07:11.027674  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG |   <name>mk-kubernetes-upgrade-345460</name>
	I0420 01:07:11.027688  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG |   <dns enable='no'/>
	I0420 01:07:11.027699  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG |   
	I0420 01:07:11.027721  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0420 01:07:11.027745  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG |     <dhcp>
	I0420 01:07:11.027755  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0420 01:07:11.027768  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG |     </dhcp>
	I0420 01:07:11.027781  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG |   </ip>
	I0420 01:07:11.027790  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG |   
	I0420 01:07:11.027802  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | </network>
	I0420 01:07:11.027816  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | 
	I0420 01:07:11.033264  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | trying to create private KVM network mk-kubernetes-upgrade-345460 192.168.50.0/24...
	I0420 01:07:11.104640  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | private KVM network mk-kubernetes-upgrade-345460 192.168.50.0/24 created
	I0420 01:07:11.104682  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460 ...
	I0420 01:07:11.104698  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:11.104607  123775 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:07:11.104710  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 01:07:11.104814  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 01:07:11.335067  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:11.334916  123775 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa...
	I0420 01:07:11.558769  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:11.558622  123775 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/kubernetes-upgrade-345460.rawdisk...
	I0420 01:07:11.558803  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Writing magic tar header
	I0420 01:07:11.558819  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Writing SSH key tar header
	I0420 01:07:11.558833  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:11.558753  123775 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460 ...
	I0420 01:07:11.558850  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460
	I0420 01:07:11.558905  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460 (perms=drwx------)
	I0420 01:07:11.558930  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 01:07:11.558948  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 01:07:11.558962  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 01:07:11.558976  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 01:07:11.559007  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 01:07:11.559023  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 01:07:11.559043  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Creating domain...
	I0420 01:07:11.559059  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:07:11.559076  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 01:07:11.559095  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 01:07:11.559110  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Checking permissions on dir: /home/jenkins
	I0420 01:07:11.559122  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Checking permissions on dir: /home
	I0420 01:07:11.559138  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Skipping /home - not owner
	I0420 01:07:11.560063  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) define libvirt domain using xml: 
	I0420 01:07:11.560088  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) <domain type='kvm'>
	I0420 01:07:11.560098  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   <name>kubernetes-upgrade-345460</name>
	I0420 01:07:11.560110  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   <memory unit='MiB'>2200</memory>
	I0420 01:07:11.560119  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   <vcpu>2</vcpu>
	I0420 01:07:11.560130  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   <features>
	I0420 01:07:11.560140  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <acpi/>
	I0420 01:07:11.560152  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <apic/>
	I0420 01:07:11.560168  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <pae/>
	I0420 01:07:11.560180  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     
	I0420 01:07:11.560209  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   </features>
	I0420 01:07:11.560234  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   <cpu mode='host-passthrough'>
	I0420 01:07:11.560243  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   
	I0420 01:07:11.560255  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   </cpu>
	I0420 01:07:11.560268  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   <os>
	I0420 01:07:11.560279  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <type>hvm</type>
	I0420 01:07:11.560291  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <boot dev='cdrom'/>
	I0420 01:07:11.560301  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <boot dev='hd'/>
	I0420 01:07:11.560335  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <bootmenu enable='no'/>
	I0420 01:07:11.560361  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   </os>
	I0420 01:07:11.560373  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   <devices>
	I0420 01:07:11.560386  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <disk type='file' device='cdrom'>
	I0420 01:07:11.560404  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/boot2docker.iso'/>
	I0420 01:07:11.560416  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <target dev='hdc' bus='scsi'/>
	I0420 01:07:11.560428  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <readonly/>
	I0420 01:07:11.560438  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     </disk>
	I0420 01:07:11.560454  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <disk type='file' device='disk'>
	I0420 01:07:11.560471  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 01:07:11.560488  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/kubernetes-upgrade-345460.rawdisk'/>
	I0420 01:07:11.560501  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <target dev='hda' bus='virtio'/>
	I0420 01:07:11.560511  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     </disk>
	I0420 01:07:11.560525  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <interface type='network'>
	I0420 01:07:11.560547  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <source network='mk-kubernetes-upgrade-345460'/>
	I0420 01:07:11.560561  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <model type='virtio'/>
	I0420 01:07:11.560573  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     </interface>
	I0420 01:07:11.560587  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <interface type='network'>
	I0420 01:07:11.560600  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <source network='default'/>
	I0420 01:07:11.560613  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <model type='virtio'/>
	I0420 01:07:11.560628  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     </interface>
	I0420 01:07:11.560640  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <serial type='pty'>
	I0420 01:07:11.560651  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <target port='0'/>
	I0420 01:07:11.560663  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     </serial>
	I0420 01:07:11.560675  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <console type='pty'>
	I0420 01:07:11.560689  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <target type='serial' port='0'/>
	I0420 01:07:11.560704  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     </console>
	I0420 01:07:11.560717  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     <rng model='virtio'>
	I0420 01:07:11.560729  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)       <backend model='random'>/dev/random</backend>
	I0420 01:07:11.560742  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     </rng>
	I0420 01:07:11.560750  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     
	I0420 01:07:11.560762  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)     
	I0420 01:07:11.560778  123271 main.go:141] libmachine: (kubernetes-upgrade-345460)   </devices>
	I0420 01:07:11.560794  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) </domain>
	I0420 01:07:11.560805  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) 
	I0420 01:07:11.565565  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:8d:dc:fe in network default
	I0420 01:07:11.566251  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Ensuring networks are active...
	I0420 01:07:11.566275  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:11.566971  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Ensuring network default is active
	I0420 01:07:11.567292  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Ensuring network mk-kubernetes-upgrade-345460 is active
	I0420 01:07:11.567782  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Getting domain xml...
	I0420 01:07:11.568621  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Creating domain...
	I0420 01:07:12.774737  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Waiting to get IP...
	I0420 01:07:12.775443  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:12.775938  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:12.776005  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:12.775928  123775 retry.go:31] will retry after 201.342212ms: waiting for machine to come up
	I0420 01:07:12.979393  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:12.979929  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:12.979963  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:12.979833  123775 retry.go:31] will retry after 344.481716ms: waiting for machine to come up
	I0420 01:07:13.326361  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:13.326800  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:13.326831  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:13.326757  123775 retry.go:31] will retry after 312.29983ms: waiting for machine to come up
	I0420 01:07:13.640254  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:13.640747  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:13.640778  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:13.640692  123775 retry.go:31] will retry after 366.422725ms: waiting for machine to come up
	I0420 01:07:14.008158  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:14.008596  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:14.008628  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:14.008552  123775 retry.go:31] will retry after 715.793753ms: waiting for machine to come up
	I0420 01:07:14.726436  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:14.726937  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:14.726973  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:14.726875  123775 retry.go:31] will retry after 938.302698ms: waiting for machine to come up
	I0420 01:07:15.666578  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:15.667250  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:15.667289  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:15.667200  123775 retry.go:31] will retry after 1.092462086s: waiting for machine to come up
	I0420 01:07:16.761488  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:16.761954  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:16.761978  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:16.761913  123775 retry.go:31] will retry after 1.154055078s: waiting for machine to come up
	I0420 01:07:17.917879  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:17.918389  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:17.918426  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:17.918305  123775 retry.go:31] will retry after 1.505473211s: waiting for machine to come up
	I0420 01:07:19.426192  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:19.426768  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:19.426840  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:19.426728  123775 retry.go:31] will retry after 1.516380623s: waiting for machine to come up
	I0420 01:07:20.945681  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:20.946263  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:20.946287  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:20.946210  123775 retry.go:31] will retry after 1.928405015s: waiting for machine to come up
	I0420 01:07:22.875914  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:22.876475  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:22.876502  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:22.876420  123775 retry.go:31] will retry after 2.955533601s: waiting for machine to come up
	I0420 01:07:25.834605  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:25.835080  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:25.835110  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:25.835051  123775 retry.go:31] will retry after 3.380721216s: waiting for machine to come up
	I0420 01:07:29.218661  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:29.219788  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:07:29.219815  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:07:29.219070  123775 retry.go:31] will retry after 5.087888775s: waiting for machine to come up
	I0420 01:07:34.310499  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.310995  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Found IP for machine: 192.168.50.68
	I0420 01:07:34.311029  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has current primary IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.311042  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Reserving static IP address...
	I0420 01:07:34.311366  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-345460", mac: "52:54:00:d3:00:79", ip: "192.168.50.68"} in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.384717  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Getting to WaitForSSH function...
	I0420 01:07:34.384756  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Reserved static IP address: 192.168.50.68
	I0420 01:07:34.384773  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Waiting for SSH to be available...
	I0420 01:07:34.387837  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.388315  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:34.388346  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.388522  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Using SSH client type: external
	I0420 01:07:34.388556  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa (-rw-------)
	I0420 01:07:34.388592  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:07:34.388607  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | About to run SSH command:
	I0420 01:07:34.388626  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | exit 0
	I0420 01:07:34.518245  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | SSH cmd err, output: <nil>: 
	I0420 01:07:34.518554  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) KVM machine creation complete!
	I0420 01:07:34.518848  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetConfigRaw
	I0420 01:07:34.519525  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:07:34.519734  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:07:34.519907  123271 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 01:07:34.519925  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetState
	I0420 01:07:34.521221  123271 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 01:07:34.521235  123271 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 01:07:34.521241  123271 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 01:07:34.521247  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:34.524189  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.524553  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:34.524587  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.524751  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:34.524937  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:34.525093  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:34.525240  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:34.525451  123271 main.go:141] libmachine: Using SSH client type: native
	I0420 01:07:34.525721  123271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:07:34.525746  123271 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 01:07:34.629086  123271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:07:34.629111  123271 main.go:141] libmachine: Detecting the provisioner...
	I0420 01:07:34.629124  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:34.631994  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.632342  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:34.632374  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.632512  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:34.632778  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:34.632960  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:34.633158  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:34.633420  123271 main.go:141] libmachine: Using SSH client type: native
	I0420 01:07:34.633620  123271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:07:34.633631  123271 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 01:07:34.738870  123271 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 01:07:34.738956  123271 main.go:141] libmachine: found compatible host: buildroot
	I0420 01:07:34.738970  123271 main.go:141] libmachine: Provisioning with buildroot...
	I0420 01:07:34.738985  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetMachineName
	I0420 01:07:34.739278  123271 buildroot.go:166] provisioning hostname "kubernetes-upgrade-345460"
	I0420 01:07:34.739312  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetMachineName
	I0420 01:07:34.739559  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:34.742516  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.743015  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:34.743047  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.743258  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:34.743464  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:34.743634  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:34.743794  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:34.743982  123271 main.go:141] libmachine: Using SSH client type: native
	I0420 01:07:34.744212  123271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:07:34.744230  123271 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-345460 && echo "kubernetes-upgrade-345460" | sudo tee /etc/hostname
	I0420 01:07:34.869370  123271 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-345460
	
	I0420 01:07:34.869414  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:34.872202  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.872556  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:34.872589  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.872815  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:34.873048  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:34.873286  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:34.873499  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:34.873703  123271 main.go:141] libmachine: Using SSH client type: native
	I0420 01:07:34.873882  123271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:07:34.873902  123271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-345460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-345460/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-345460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:07:34.983965  123271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:07:34.983992  123271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:07:34.984012  123271 buildroot.go:174] setting up certificates
	I0420 01:07:34.984020  123271 provision.go:84] configureAuth start
	I0420 01:07:34.984029  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetMachineName
	I0420 01:07:34.984321  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetIP
	I0420 01:07:34.987090  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.987456  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:34.987490  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.987614  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:34.989912  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.990317  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:34.990346  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:34.990567  123271 provision.go:143] copyHostCerts
	I0420 01:07:34.990634  123271 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:07:34.990649  123271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:07:34.990718  123271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:07:34.990857  123271 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:07:34.990868  123271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:07:34.990896  123271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:07:34.991028  123271 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:07:34.991042  123271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:07:34.991068  123271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:07:34.991135  123271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-345460 san=[127.0.0.1 192.168.50.68 kubernetes-upgrade-345460 localhost minikube]
	I0420 01:07:35.209018  123271 provision.go:177] copyRemoteCerts
	I0420 01:07:35.209087  123271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:07:35.209124  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:35.212344  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.212752  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:35.212782  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.212932  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:35.213137  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:35.213284  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:35.213458  123271 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa Username:docker}
	I0420 01:07:35.306681  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:07:35.334331  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0420 01:07:35.361610  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:07:35.388905  123271 provision.go:87] duration metric: took 404.86909ms to configureAuth
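	[editor note] The lines above show minikube generating a server certificate with SANs [127.0.0.1 192.168.50.68 kubernetes-upgrade-345460 localhost minikube] and copying ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal manual check of that provisioning, assuming SSH access with the id_rsa key and docker user shown later in this log, and OpenSSL available on the Buildroot guest (this command is not part of the test run):
	  ssh -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa \
	      docker@192.168.50.68 \
	      'sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"'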
	I0420 01:07:35.388933  123271 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:07:35.389097  123271 config.go:182] Loaded profile config "kubernetes-upgrade-345460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:07:35.389191  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:35.392091  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.392439  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:35.392521  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.392684  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:35.392911  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:35.393131  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:35.393270  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:35.393477  123271 main.go:141] libmachine: Using SSH client type: native
	I0420 01:07:35.393687  123271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:07:35.393704  123271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:07:35.678623  123271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:07:35.678653  123271 main.go:141] libmachine: Checking connection to Docker...
	I0420 01:07:35.678666  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetURL
	I0420 01:07:35.680129  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | Using libvirt version 6000000
	I0420 01:07:35.682230  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.682694  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:35.682733  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.682939  123271 main.go:141] libmachine: Docker is up and running!
	I0420 01:07:35.682962  123271 main.go:141] libmachine: Reticulating splines...
	I0420 01:07:35.682971  123271 client.go:171] duration metric: took 24.660209982s to LocalClient.Create
	I0420 01:07:35.683001  123271 start.go:167] duration metric: took 24.660295482s to libmachine.API.Create "kubernetes-upgrade-345460"
	I0420 01:07:35.683017  123271 start.go:293] postStartSetup for "kubernetes-upgrade-345460" (driver="kvm2")
	I0420 01:07:35.683031  123271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:07:35.683052  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:07:35.683343  123271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:07:35.683370  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:35.685374  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.685749  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:35.685796  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.685976  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:35.686229  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:35.686389  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:35.686546  123271 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa Username:docker}
	I0420 01:07:35.774327  123271 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:07:35.779308  123271 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:07:35.779338  123271 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:07:35.779423  123271 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:07:35.779509  123271 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:07:35.779594  123271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:07:35.789879  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:07:35.818488  123271 start.go:296] duration metric: took 135.452976ms for postStartSetup
	I0420 01:07:35.818551  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetConfigRaw
	I0420 01:07:35.819199  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetIP
	I0420 01:07:35.822167  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.822519  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:35.822546  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.822814  123271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/config.json ...
	I0420 01:07:35.823024  123271 start.go:128] duration metric: took 24.824564454s to createHost
	I0420 01:07:35.823066  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:35.825532  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.825922  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:35.825952  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.826140  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:35.826375  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:35.826556  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:35.826713  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:35.826963  123271 main.go:141] libmachine: Using SSH client type: native
	I0420 01:07:35.827199  123271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:07:35.827219  123271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0420 01:07:35.930995  123271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713575255.908526106
	
	I0420 01:07:35.931020  123271 fix.go:216] guest clock: 1713575255.908526106
	I0420 01:07:35.931029  123271 fix.go:229] Guest: 2024-04-20 01:07:35.908526106 +0000 UTC Remote: 2024-04-20 01:07:35.823053735 +0000 UTC m=+51.653717366 (delta=85.472371ms)
	I0420 01:07:35.931049  123271 fix.go:200] guest clock delta is within tolerance: 85.472371ms
	I0420 01:07:35.931054  123271 start.go:83] releasing machines lock for "kubernetes-upgrade-345460", held for 24.932773386s
	I0420 01:07:35.931084  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:07:35.931378  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetIP
	I0420 01:07:35.934221  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.934588  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:35.934614  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.934814  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:07:35.935339  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:07:35.935535  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:07:35.935627  123271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:07:35.935671  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:35.935788  123271 ssh_runner.go:195] Run: cat /version.json
	I0420 01:07:35.935815  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:07:35.938297  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.938665  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:35.938692  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.938712  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.938799  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:35.938988  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:35.939129  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:35.939145  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:35.939171  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:35.939323  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:07:35.939350  123271 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa Username:docker}
	I0420 01:07:35.939462  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:07:35.939590  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:07:35.939738  123271 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa Username:docker}
	I0420 01:07:36.028415  123271 ssh_runner.go:195] Run: systemctl --version
	I0420 01:07:36.050859  123271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:07:36.228028  123271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:07:36.235623  123271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:07:36.235693  123271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:07:36.260208  123271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:07:36.260241  123271 start.go:494] detecting cgroup driver to use...
	I0420 01:07:36.260316  123271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:07:36.282665  123271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:07:36.300507  123271 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:07:36.300575  123271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:07:36.321951  123271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:07:36.344812  123271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:07:36.504969  123271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:07:36.682541  123271 docker.go:233] disabling docker service ...
	I0420 01:07:36.682630  123271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:07:36.702829  123271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:07:36.724510  123271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:07:36.887654  123271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:07:37.025337  123271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:07:37.042102  123271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:07:37.066669  123271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0420 01:07:37.066720  123271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:07:37.079001  123271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:07:37.079068  123271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:07:37.091292  123271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:07:37.104671  123271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:07:37.120667  123271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:07:37.140766  123271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:07:37.155732  123271 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:07:37.155801  123271 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:07:37.173268  123271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:07:37.187024  123271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:07:37.329188  123271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:07:37.489983  123271 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:07:37.490077  123271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:07:37.496114  123271 start.go:562] Will wait 60s for crictl version
	I0420 01:07:37.496186  123271 ssh_runner.go:195] Run: which crictl
	I0420 01:07:37.501162  123271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:07:37.544971  123271 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
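	[editor note] The preceding steps point cri-o at the registry.k8s.io/pause:3.2 image and the cgroupfs cgroup manager via sed edits on /etc/crio/crio.conf.d/02-crio.conf, restart crio, and probe it with crictl. A short sketch for confirming those two settings took effect on the guest, using the same paths the log already shows (a manual verification, not something minikube runs here):
	  sudo grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version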
	I0420 01:07:37.545070  123271 ssh_runner.go:195] Run: crio --version
	I0420 01:07:37.582430  123271 ssh_runner.go:195] Run: crio --version
	I0420 01:07:37.622037  123271 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0420 01:07:37.623336  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetIP
	I0420 01:07:37.626880  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:37.627363  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:07:27 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:07:37.627401  123271 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:07:37.627656  123271 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0420 01:07:37.633870  123271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:07:37.650998  123271 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-345460 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:07:37.651121  123271 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:07:37.651185  123271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:07:37.688642  123271 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:07:37.688729  123271 ssh_runner.go:195] Run: which lz4
	I0420 01:07:37.693707  123271 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:07:37.699245  123271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:07:37.699284  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0420 01:07:40.032770  123271 crio.go:462] duration metric: took 2.339100438s to copy over tarball
	I0420 01:07:40.032862  123271 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:07:43.322482  123271 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.28955642s)
	I0420 01:07:43.322522  123271 crio.go:469] duration metric: took 3.289711819s to extract the tarball
	I0420 01:07:43.322533  123271 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:07:43.370452  123271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:07:43.436815  123271 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:07:43.436841  123271 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:07:43.436914  123271 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:07:43.436982  123271 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:07:43.436987  123271 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0420 01:07:43.436955  123271 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:07:43.436914  123271 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:07:43.437135  123271 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0420 01:07:43.437079  123271 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:07:43.437070  123271 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:07:43.438784  123271 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:07:43.438808  123271 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0420 01:07:43.438845  123271 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:07:43.438806  123271 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:07:43.438868  123271 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:07:43.438866  123271 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:07:43.438913  123271 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:07:43.439049  123271 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0420 01:07:43.605172  123271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:07:43.605662  123271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0420 01:07:43.620209  123271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0420 01:07:43.630079  123271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:07:43.664059  123271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:07:43.686296  123271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0420 01:07:43.694997  123271 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0420 01:07:43.695049  123271 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:07:43.695115  123271 ssh_runner.go:195] Run: which crictl
	I0420 01:07:43.702453  123271 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0420 01:07:43.702497  123271 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:07:43.702537  123271 ssh_runner.go:195] Run: which crictl
	I0420 01:07:43.748056  123271 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0420 01:07:43.748107  123271 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0420 01:07:43.748154  123271 ssh_runner.go:195] Run: which crictl
	I0420 01:07:43.752904  123271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:07:43.765763  123271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:07:43.768804  123271 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0420 01:07:43.768852  123271 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:07:43.768930  123271 ssh_runner.go:195] Run: which crictl
	I0420 01:07:43.799658  123271 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0420 01:07:43.799716  123271 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:07:43.799788  123271 ssh_runner.go:195] Run: which crictl
	I0420 01:07:43.833224  123271 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0420 01:07:43.833268  123271 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0420 01:07:43.833332  123271 ssh_runner.go:195] Run: which crictl
	I0420 01:07:43.833462  123271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:07:43.833561  123271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0420 01:07:43.833632  123271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0420 01:07:43.898507  123271 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0420 01:07:43.898568  123271 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:07:43.898620  123271 ssh_runner.go:195] Run: which crictl
	I0420 01:07:43.997691  123271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:07:43.997711  123271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:07:43.997759  123271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0420 01:07:43.997833  123271 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0420 01:07:43.997901  123271 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0420 01:07:43.997947  123271 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0420 01:07:43.997982  123271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:07:44.109791  123271 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0420 01:07:44.112527  123271 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0420 01:07:44.112555  123271 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0420 01:07:44.112637  123271 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0420 01:07:44.112713  123271 cache_images.go:92] duration metric: took 675.859528ms to LoadCachedImages
	W0420 01:07:44.112800  123271 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0420 01:07:44.112817  123271 kubeadm.go:928] updating node { 192.168.50.68 8443 v1.20.0 crio true true} ...
	I0420 01:07:44.112957  123271 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-345460 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:07:44.113052  123271 ssh_runner.go:195] Run: crio config
	I0420 01:07:44.178054  123271 cni.go:84] Creating CNI manager for ""
	I0420 01:07:44.178083  123271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:07:44.178108  123271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:07:44.178135  123271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.68 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-345460 NodeName:kubernetes-upgrade-345460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0420 01:07:44.178344  123271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-345460"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
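	[editor note] A hypothetical dry-run of the kubeadm config dumped above; minikube writes it to /var/tmp/minikube/kubeadm.yaml.new a few lines below, and the v1.20.0 binaries directory appears later in this log, so both paths are assumptions carried over from there. kubeadm init's --config and --dry-run flags are standard, but this check is illustrative only and is not part of the recorded test run:
	  sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run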
	
	I0420 01:07:44.178426  123271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0420 01:07:44.190875  123271 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:07:44.190937  123271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:07:44.205245  123271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0420 01:07:44.226574  123271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:07:44.332195  123271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0420 01:07:44.355679  123271 ssh_runner.go:195] Run: grep 192.168.50.68	control-plane.minikube.internal$ /etc/hosts
	I0420 01:07:44.360592  123271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:07:44.376278  123271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:07:44.524769  123271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:07:44.546083  123271 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460 for IP: 192.168.50.68
	I0420 01:07:44.546113  123271 certs.go:194] generating shared ca certs ...
	I0420 01:07:44.546136  123271 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:07:44.546332  123271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:07:44.546381  123271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:07:44.546392  123271 certs.go:256] generating profile certs ...
	I0420 01:07:44.546470  123271 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/client.key
	I0420 01:07:44.546484  123271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/client.crt with IP's: []
	I0420 01:07:44.758419  123271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/client.crt ...
	I0420 01:07:44.758458  123271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/client.crt: {Name:mkbd76d6c0a1ff47d5148bb33adc3d3beb6f6676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:07:44.758659  123271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/client.key ...
	I0420 01:07:44.758679  123271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/client.key: {Name:mk38c305a5aaeb116a6c29ac30c0708e748e0d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:07:44.758799  123271 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.key.514fbe5a
	I0420 01:07:44.758819  123271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.crt.514fbe5a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.68]
	I0420 01:07:44.809937  123271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.crt.514fbe5a ...
	I0420 01:07:44.809972  123271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.crt.514fbe5a: {Name:mk5b7ef8d6c27577dc71475c04eee1369b3b5460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:07:44.810155  123271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.key.514fbe5a ...
	I0420 01:07:44.810173  123271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.key.514fbe5a: {Name:mk698829a3699b7abb509f411b60f4e31c9a3016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:07:44.810273  123271 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.crt.514fbe5a -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.crt
	I0420 01:07:44.810389  123271 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.key.514fbe5a -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.key
	I0420 01:07:44.810483  123271 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.key
	I0420 01:07:44.810508  123271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.crt with IP's: []
	I0420 01:07:44.898656  123271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.crt ...
	I0420 01:07:44.898689  123271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.crt: {Name:mk2f31152f9a3031eebb71af7391da8be6a9e0d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:07:44.898849  123271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.key ...
	I0420 01:07:44.898863  123271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.key: {Name:mke861d1cf1c5a97b06bc2fdd9f57b5049c87058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:07:44.899046  123271 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:07:44.899082  123271 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:07:44.899099  123271 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:07:44.899120  123271 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:07:44.899141  123271 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:07:44.899160  123271 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:07:44.899200  123271 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:07:44.899865  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:07:44.930606  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:07:44.961599  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:07:44.991359  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:07:45.022391  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0420 01:07:45.052595  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:07:45.080923  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:07:45.109326  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:07:45.140447  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:07:45.172841  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:07:45.199721  123271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:07:45.226921  123271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:07:45.250026  123271 ssh_runner.go:195] Run: openssl version
	I0420 01:07:45.258560  123271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:07:45.274083  123271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:07:45.279781  123271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:07:45.279847  123271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:07:45.286890  123271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:07:45.301115  123271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:07:45.315586  123271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:07:45.321729  123271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:07:45.321791  123271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:07:45.328679  123271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:07:45.344438  123271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:07:45.362932  123271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:07:45.370306  123271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:07:45.370383  123271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:07:45.379482  123271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
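Note: the hash-named links created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention for /etc/ssl/certs. A minimal sketch of the same two-step installation, assuming a hypothetical certificate example.pem already copied into /usr/share/ca-certificates:

	# expose the cert under /etc/ssl/certs by name
	sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
	# compute the OpenSSL subject hash and create the <hash>.0 lookup link
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /etc/ssl/certs/example.pem /etc/ssl/certs/${HASH}.0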
	I0420 01:07:45.393540  123271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:07:45.399016  123271 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 01:07:45.399106  123271 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:07:45.399223  123271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:07:45.399294  123271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:07:45.459324  123271 cri.go:89] found id: ""
	I0420 01:07:45.459418  123271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 01:07:45.480309  123271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:07:45.498472  123271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:07:45.519805  123271 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:07:45.519830  123271 kubeadm.go:156] found existing configuration files:
	
	I0420 01:07:45.519890  123271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:07:45.532966  123271 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:07:45.533069  123271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:07:45.545602  123271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:07:45.559324  123271 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:07:45.559407  123271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:07:45.571565  123271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:07:45.583792  123271 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:07:45.583880  123271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:07:45.598587  123271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:07:45.612120  123271 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:07:45.612180  123271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
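The grep/rm pairs above are minikube's stale-kubeconfig check: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed before kubeadm init runs. A condensed sketch of that loop, using the same files and endpoint as the log:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/${f}.conf \
	    || sudo rm -f /etc/kubernetes/${f}.conf
	done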
	I0420 01:07:45.628098  123271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:07:45.762643  123271 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:07:45.762717  123271 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:07:45.945568  123271 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:07:45.945741  123271 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:07:45.945897  123271 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:07:46.175034  123271 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:07:46.177519  123271 out.go:204]   - Generating certificates and keys ...
	I0420 01:07:46.177639  123271 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:07:46.177734  123271 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:07:46.394034  123271 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 01:07:46.648402  123271 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 01:07:46.765951  123271 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 01:07:46.937726  123271 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 01:07:47.126095  123271 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 01:07:47.126556  123271 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-345460 localhost] and IPs [192.168.50.68 127.0.0.1 ::1]
	I0420 01:07:47.301957  123271 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 01:07:47.302653  123271 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-345460 localhost] and IPs [192.168.50.68 127.0.0.1 ::1]
	I0420 01:07:47.542443  123271 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 01:07:48.194706  123271 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 01:07:48.306856  123271 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 01:07:48.307253  123271 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:07:48.363822  123271 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:07:48.552536  123271 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:07:48.630235  123271 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:07:48.879293  123271 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:07:48.898800  123271 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:07:48.899687  123271 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:07:48.899855  123271 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:07:49.052701  123271 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:07:49.054678  123271 out.go:204]   - Booting up control plane ...
	I0420 01:07:49.054816  123271 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:07:49.060880  123271 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:07:49.061983  123271 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:07:49.063580  123271 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:07:49.071100  123271 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:08:29.069898  123271 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:08:29.070803  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:08:29.071089  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:08:34.072244  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:08:34.072540  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:08:44.073449  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:08:44.073778  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:09:04.074982  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:09:04.075246  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:09:44.075680  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:09:44.075893  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:09:44.075924  123271 kubeadm.go:309] 
	I0420 01:09:44.076019  123271 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:09:44.076103  123271 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:09:44.076123  123271 kubeadm.go:309] 
	I0420 01:09:44.076169  123271 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:09:44.076215  123271 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:09:44.076347  123271 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:09:44.076359  123271 kubeadm.go:309] 
	I0420 01:09:44.076472  123271 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:09:44.076529  123271 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:09:44.076578  123271 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:09:44.076588  123271 kubeadm.go:309] 
	I0420 01:09:44.076749  123271 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:09:44.076871  123271 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:09:44.076882  123271 kubeadm.go:309] 
	I0420 01:09:44.076996  123271 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:09:44.077113  123271 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:09:44.077210  123271 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:09:44.077328  123271 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:09:44.077339  123271 kubeadm.go:309] 
	I0420 01:09:44.077735  123271 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:09:44.077864  123271 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:09:44.077968  123271 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0420 01:09:44.078191  123271 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-345460 localhost] and IPs [192.168.50.68 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-345460 localhost] and IPs [192.168.50.68 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-345460 localhost] and IPs [192.168.50.68 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-345460 localhost] and IPs [192.168.50.68 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
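The kubeadm failure text above already lists the manual checks for this situation. Collected here as one hedged sequence (commands taken from the error output; sudo added where root access is normally needed, and CONTAINERID is a placeholder):

	# probe the kubelet health endpoint kubeadm polls
	curl -sSL http://localhost:10248/healthz
	# inspect the kubelet service and its recent log
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then read a failing container's log
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID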
	
	I0420 01:09:44.078257  123271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:09:46.056595  123271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.978295785s)
	I0420 01:09:46.056691  123271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:09:46.078388  123271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:09:46.091212  123271 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:09:46.091234  123271 kubeadm.go:156] found existing configuration files:
	
	I0420 01:09:46.091285  123271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:09:46.103073  123271 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:09:46.103150  123271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:09:46.119879  123271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:09:46.131726  123271 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:09:46.131805  123271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:09:46.143181  123271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:09:46.158052  123271 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:09:46.158114  123271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:09:46.175130  123271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:09:46.191582  123271 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:09:46.191673  123271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:09:46.206903  123271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:09:46.502544  123271 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:11:42.517436  123271 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:11:42.517551  123271 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0420 01:11:42.519164  123271 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:11:42.519227  123271 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:11:42.519317  123271 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:11:42.519426  123271 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:11:42.519538  123271 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:11:42.519615  123271 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:11:42.521662  123271 out.go:204]   - Generating certificates and keys ...
	I0420 01:11:42.521779  123271 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:11:42.521933  123271 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:11:42.522037  123271 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:11:42.522159  123271 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:11:42.522272  123271 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:11:42.522342  123271 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:11:42.522430  123271 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:11:42.522523  123271 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:11:42.522619  123271 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:11:42.522736  123271 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:11:42.522801  123271 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:11:42.522872  123271 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:11:42.522943  123271 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:11:42.523031  123271 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:11:42.523133  123271 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:11:42.523216  123271 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:11:42.523351  123271 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:11:42.523469  123271 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:11:42.523543  123271 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:11:42.523635  123271 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:11:42.525135  123271 out.go:204]   - Booting up control plane ...
	I0420 01:11:42.525245  123271 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:11:42.525329  123271 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:11:42.525387  123271 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:11:42.525502  123271 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:11:42.525699  123271 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:11:42.525785  123271 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:11:42.525889  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:11:42.526116  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:11:42.526176  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:11:42.526388  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:11:42.526484  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:11:42.526720  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:11:42.526813  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:11:42.527049  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:11:42.527154  123271 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:11:42.527410  123271 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:11:42.527428  123271 kubeadm.go:309] 
	I0420 01:11:42.527482  123271 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:11:42.527535  123271 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:11:42.527547  123271 kubeadm.go:309] 
	I0420 01:11:42.527598  123271 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:11:42.527645  123271 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:11:42.527777  123271 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:11:42.527786  123271 kubeadm.go:309] 
	I0420 01:11:42.527869  123271 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:11:42.527913  123271 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:11:42.527960  123271 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:11:42.527970  123271 kubeadm.go:309] 
	I0420 01:11:42.528087  123271 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:11:42.528190  123271 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:11:42.528205  123271 kubeadm.go:309] 
	I0420 01:11:42.528318  123271 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:11:42.528424  123271 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:11:42.528521  123271 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:11:42.528615  123271 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:11:42.528686  123271 kubeadm.go:393] duration metric: took 3m57.129585675s to StartCluster
	I0420 01:11:42.528730  123271 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:11:42.528793  123271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:11:42.528856  123271 kubeadm.go:309] 
	I0420 01:11:42.584367  123271 cri.go:89] found id: ""
	I0420 01:11:42.584396  123271 logs.go:276] 0 containers: []
	W0420 01:11:42.584405  123271 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:11:42.584411  123271 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:11:42.584462  123271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:11:42.632860  123271 cri.go:89] found id: ""
	I0420 01:11:42.632891  123271 logs.go:276] 0 containers: []
	W0420 01:11:42.632899  123271 logs.go:278] No container was found matching "etcd"
	I0420 01:11:42.632906  123271 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:11:42.632966  123271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:11:42.686084  123271 cri.go:89] found id: ""
	I0420 01:11:42.686122  123271 logs.go:276] 0 containers: []
	W0420 01:11:42.686133  123271 logs.go:278] No container was found matching "coredns"
	I0420 01:11:42.686141  123271 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:11:42.686205  123271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:11:42.736981  123271 cri.go:89] found id: ""
	I0420 01:11:42.737011  123271 logs.go:276] 0 containers: []
	W0420 01:11:42.737020  123271 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:11:42.737026  123271 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:11:42.737074  123271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:11:42.793877  123271 cri.go:89] found id: ""
	I0420 01:11:42.793910  123271 logs.go:276] 0 containers: []
	W0420 01:11:42.793921  123271 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:11:42.793928  123271 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:11:42.793995  123271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:11:42.844207  123271 cri.go:89] found id: ""
	I0420 01:11:42.844241  123271 logs.go:276] 0 containers: []
	W0420 01:11:42.844252  123271 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:11:42.844261  123271 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:11:42.844326  123271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:11:42.883066  123271 cri.go:89] found id: ""
	I0420 01:11:42.883093  123271 logs.go:276] 0 containers: []
	W0420 01:11:42.883101  123271 logs.go:278] No container was found matching "kindnet"
	I0420 01:11:42.883110  123271 logs.go:123] Gathering logs for container status ...
	I0420 01:11:42.883124  123271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:11:42.926336  123271 logs.go:123] Gathering logs for kubelet ...
	I0420 01:11:42.926369  123271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:11:42.979904  123271 logs.go:123] Gathering logs for dmesg ...
	I0420 01:11:42.979936  123271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:11:42.995739  123271 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:11:42.995773  123271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:11:43.131389  123271 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:11:43.131418  123271 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:11:43.131431  123271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
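With no control-plane containers found, minikube falls back to node-level diagnostics, as the "Gathering logs" steps above show. The same collection can be run by hand on the node; this is a sketch using the paths and unit names from the log (the kubectl path assumes minikube's v1.20.0 binary location):

	sudo crictl ps -a
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400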
	W0420 01:11:43.226887  123271 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0420 01:11:43.226941  123271 out.go:239] * 
	* 
	W0420 01:11:43.226999  123271 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:11:43.227022  123271 out.go:239] * 
	* 
	W0420 01:11:43.227874  123271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:11:43.230768  123271 out.go:177] 
	W0420 01:11:43.232005  123271 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:11:43.232083  123271 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0420 01:11:43.232117  123271 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0420 01:11:43.233533  123271 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
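The kubeadm output above shows the kubelet never answering its health check while the control plane waits, and minikube's own suggestion points at the kubelet cgroup driver. A minimal diagnostic pass, assuming the kubernetes-upgrade-345460 profile were still running and using only the commands the output itself recommends, could look like this:

	# inspect the kubelet inside the node, as the kubeadm output suggests
	minikube -p kubernetes-upgrade-345460 ssh "sudo systemctl status kubelet"
	minikube -p kubernetes-upgrade-345460 ssh "sudo journalctl -xeu kubelet"
	# list control-plane containers through the cri-o socket
	minikube -p kubernetes-upgrade-345460 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup driver named in the suggestion above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd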
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-345460
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-345460: (2.309128048s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-345460 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-345460 status --format={{.Host}}: exit status 7 (76.258178ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
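Exit status 7 here goes with the Host state of "Stopped" in the stdout above, which is why the harness notes it "may be ok" before retrying the start at v1.30.0. The same check can be run by hand against the stopped profile:

	# host-only status for a stopped profile; a non-zero exit code is expected
	out/minikube-linux-amd64 -p kubernetes-upgrade-345460 status --format='{{.Host}}'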
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.629352009s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-345460 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (94.452269ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-345460] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-345460
	    minikube start -p kubernetes-upgrade-345460 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3454602 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-345460 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
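Minikube refuses the in-place downgrade from v1.30.0 to v1.20.0 by design, and the stderr above lists the ways out. A sketch of the first option it suggests, recreating the profile at the older version:

	# recreate the cluster at v1.20.0, exactly as the suggestion above describes
	minikube delete -p kubernetes-upgrade-345460
	minikube start -p kubernetes-upgrade-345460 --kubernetes-version=v1.20.0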
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0420 01:13:11.658916   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-345460 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.348082171s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-20 01:14:03.814704729 +0000 UTC m=+4617.329572124
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-345460 -n kubernetes-upgrade-345460
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-345460 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-345460 logs -n 25: (2.166021753s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-831611 sudo cat                            | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo cat                            | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo docker                         | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo cat                            | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo cat                            | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo cat                            | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo cat                            | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo                                | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo find                           | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 sudo crio                           | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p bridge-831611                                     | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:12 UTC |
	| start   | -p flannel-831611                                    | flannel-831611            | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-345460                         | kubernetes-upgrade-345460 | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-345460                         | kubernetes-upgrade-345460 | jenkins | v1.33.0 | 20 Apr 24 01:12 UTC | 20 Apr 24 01:14 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-831611                         | enable-default-cni-831611 | jenkins | v1.33.0 | 20 Apr 24 01:13 UTC | 20 Apr 24 01:13 UTC |
	|         | pgrep -a kubelet                                     |                           |         |         |                     |                     |
	| ssh     | -p kindnet-831611 pgrep -a                           | kindnet-831611            | jenkins | v1.33.0 | 20 Apr 24 01:13 UTC | 20 Apr 24 01:13 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:12:28
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:12:28.516576  130085 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:12:28.516678  130085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:12:28.516682  130085 out.go:304] Setting ErrFile to fd 2...
	I0420 01:12:28.516688  130085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:12:28.516895  130085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:12:28.517444  130085 out.go:298] Setting JSON to false
	I0420 01:12:28.518412  130085 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14096,"bootTime":1713561453,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:12:28.518474  130085 start.go:139] virtualization: kvm guest
	I0420 01:12:28.520461  130085 out.go:177] * [kubernetes-upgrade-345460] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:12:28.522230  130085 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:12:28.522245  130085 notify.go:220] Checking for updates...
	I0420 01:12:28.523708  130085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:12:28.525118  130085 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:12:28.526376  130085 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:12:28.527603  130085 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:12:28.528862  130085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:12:28.530538  130085 config.go:182] Loaded profile config "kubernetes-upgrade-345460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:12:28.530934  130085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:12:28.530980  130085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:12:28.545582  130085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35635
	I0420 01:12:28.545942  130085 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:12:28.546424  130085 main.go:141] libmachine: Using API Version  1
	I0420 01:12:28.546445  130085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:12:28.546743  130085 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:12:28.546916  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:12:28.547170  130085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:12:28.547431  130085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:12:28.547463  130085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:12:28.563187  130085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0420 01:12:28.564228  130085 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:12:28.565101  130085 main.go:141] libmachine: Using API Version  1
	I0420 01:12:28.565137  130085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:12:28.565741  130085 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:12:28.565919  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:12:28.596540  130085 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:12:28.597735  130085 start.go:297] selected driver: kvm2
	I0420 01:12:28.597751  130085 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:12:28.597887  130085 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:12:28.598598  130085 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:12:28.598694  130085 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:12:28.614551  130085 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:12:28.614923  130085 cni.go:84] Creating CNI manager for ""
	I0420 01:12:28.614939  130085 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:12:28.614978  130085 start.go:340] cluster config:
	{Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:12:28.615063  130085 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:12:28.616658  130085 out.go:177] * Starting "kubernetes-upgrade-345460" primary control-plane node in "kubernetes-upgrade-345460" cluster
	I0420 01:12:24.161933  129976 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:12:24.161976  129976 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:12:24.161990  129976 cache.go:56] Caching tarball of preloaded images
	I0420 01:12:24.162072  129976 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:12:24.162088  129976 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:12:24.162219  129976 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/config.json ...
	I0420 01:12:24.162246  129976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/config.json: {Name:mkfd30aed4e24f3cdb185796676015ba989698f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:12:24.162424  129976 start.go:360] acquireMachinesLock for flannel-831611: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:12:31.457135  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:31.457610  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | unable to find current IP address of domain enable-default-cni-831611 in network mk-enable-default-cni-831611
	I0420 01:12:31.457646  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | I0420 01:12:31.457567  128616 retry.go:31] will retry after 5.376991096s: waiting for machine to come up
	I0420 01:12:28.617891  130085 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:12:28.617924  130085 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:12:28.617932  130085 cache.go:56] Caching tarball of preloaded images
	I0420 01:12:28.618029  130085 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:12:28.618047  130085 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:12:28.618143  130085 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/config.json ...
	I0420 01:12:28.618327  130085 start.go:360] acquireMachinesLock for kubernetes-upgrade-345460: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:12:36.835710  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:36.836187  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has current primary IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:36.836212  128374 main.go:141] libmachine: (enable-default-cni-831611) Found IP for machine: 192.168.39.125
	I0420 01:12:36.836228  128374 main.go:141] libmachine: (enable-default-cni-831611) Reserving static IP address...
	I0420 01:12:36.836573  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-831611", mac: "52:54:00:f2:8b:de", ip: "192.168.39.125"} in network mk-enable-default-cni-831611
	I0420 01:12:36.910576  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Getting to WaitForSSH function...
	I0420 01:12:36.910609  128374 main.go:141] libmachine: (enable-default-cni-831611) Reserved static IP address: 192.168.39.125
	I0420 01:12:36.910623  128374 main.go:141] libmachine: (enable-default-cni-831611) Waiting for SSH to be available...
	I0420 01:12:36.913394  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:36.913693  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611
	I0420 01:12:36.913727  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | unable to find defined IP address of network mk-enable-default-cni-831611 interface with MAC address 52:54:00:f2:8b:de
	I0420 01:12:36.913899  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Using SSH client type: external
	I0420 01:12:36.913947  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa (-rw-------)
	I0420 01:12:36.913983  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:12:36.913998  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | About to run SSH command:
	I0420 01:12:36.914015  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | exit 0
	I0420 01:12:36.917538  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | SSH cmd err, output: exit status 255: 
	I0420 01:12:36.917572  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0420 01:12:36.917588  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | command : exit 0
	I0420 01:12:36.917597  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | err     : exit status 255
	I0420 01:12:36.917605  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | output  : 
	I0420 01:12:41.406999  128503 start.go:364] duration metric: took 41.420706665s to acquireMachinesLock for "kindnet-831611"
	I0420 01:12:41.407065  128503 start.go:93] Provisioning new machine with config: &{Name:kindnet-831611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:12:41.407228  128503 start.go:125] createHost starting for "" (driver="kvm2")
	I0420 01:12:39.918745  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Getting to WaitForSSH function...
	I0420 01:12:39.921490  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:39.921903  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:39.921943  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:39.922023  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Using SSH client type: external
	I0420 01:12:39.922036  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa (-rw-------)
	I0420 01:12:39.922068  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:12:39.922083  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | About to run SSH command:
	I0420 01:12:39.922094  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | exit 0
	I0420 01:12:40.045527  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | SSH cmd err, output: <nil>: 
	I0420 01:12:40.045829  128374 main.go:141] libmachine: (enable-default-cni-831611) KVM machine creation complete!
	I0420 01:12:40.046246  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetConfigRaw
	I0420 01:12:40.046894  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .DriverName
	I0420 01:12:40.047192  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .DriverName
	I0420 01:12:40.047421  128374 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 01:12:40.047443  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetState
	I0420 01:12:40.048700  128374 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 01:12:40.048735  128374 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 01:12:40.048740  128374 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 01:12:40.048747  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:40.050971  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.051273  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:40.051303  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.051395  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:40.051579  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.051706  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.051839  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:40.051997  128374 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:40.052238  128374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0420 01:12:40.052249  128374 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 01:12:40.152894  128374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:12:40.152919  128374 main.go:141] libmachine: Detecting the provisioner...
	I0420 01:12:40.152929  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:40.155838  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.156198  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:40.156237  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.156374  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:40.156599  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.156795  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.156989  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:40.157188  128374 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:40.157385  128374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0420 01:12:40.157397  128374 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 01:12:40.258843  128374 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 01:12:40.258907  128374 main.go:141] libmachine: found compatible host: buildroot
	I0420 01:12:40.258914  128374 main.go:141] libmachine: Provisioning with buildroot...
	I0420 01:12:40.258923  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetMachineName
	I0420 01:12:40.259199  128374 buildroot.go:166] provisioning hostname "enable-default-cni-831611"
	I0420 01:12:40.259235  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetMachineName
	I0420 01:12:40.259465  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:40.262028  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.262440  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:40.262469  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.262604  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:40.262816  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.262989  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.263143  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:40.263344  128374 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:40.263553  128374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0420 01:12:40.263573  128374 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-831611 && echo "enable-default-cni-831611" | sudo tee /etc/hostname
	I0420 01:12:40.383605  128374 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-831611
	
	I0420 01:12:40.383638  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:40.386394  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.386731  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:40.386759  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.386900  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:40.387079  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.387222  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.387361  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:40.387510  128374 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:40.387713  128374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0420 01:12:40.387731  128374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-831611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-831611/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-831611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:12:40.499745  128374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:12:40.499773  128374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:12:40.499808  128374 buildroot.go:174] setting up certificates
	I0420 01:12:40.499819  128374 provision.go:84] configureAuth start
	I0420 01:12:40.499834  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetMachineName
	I0420 01:12:40.500136  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetIP
	I0420 01:12:40.502683  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.503030  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:40.503062  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.503177  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:40.505286  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.505577  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:40.505602  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.505763  128374 provision.go:143] copyHostCerts
	I0420 01:12:40.505829  128374 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:12:40.505839  128374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:12:40.505890  128374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:12:40.505982  128374 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:12:40.505993  128374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:12:40.506012  128374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:12:40.506083  128374 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:12:40.506090  128374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:12:40.506106  128374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:12:40.506162  128374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-831611 san=[127.0.0.1 192.168.39.125 enable-default-cni-831611 localhost minikube]
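
The san=[...] list above becomes the subject alternative names of the generated server certificate. The sketch below is a hypothetical stand-in that self-signs for brevity (the real flow signs with the ca.pem/ca-key.pem pair named in the log); the IPs, DNS names, organization, and 26280h expiry are taken from the lines above.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-831611"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.125")},
            DNSNames:     []string{"enable-default-cni-831611", "localhost", "minikube"},
        }
        // Self-signed here for brevity; the real server.pem is signed by the minikube CA key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
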
	I0420 01:12:40.668257  128374 provision.go:177] copyRemoteCerts
	I0420 01:12:40.668320  128374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:12:40.668346  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:40.671414  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.671804  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:40.671844  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.672067  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:40.672259  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.672420  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:40.672597  128374 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa Username:docker}
	I0420 01:12:40.761251  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:12:40.793466  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:12:40.823114  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0420 01:12:40.855029  128374 provision.go:87] duration metric: took 355.191979ms to configureAuth
	I0420 01:12:40.855083  128374 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:12:40.855251  128374 config.go:182] Loaded profile config "enable-default-cni-831611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:12:40.855324  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:40.858200  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.858544  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:40.858567  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:40.858776  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:40.859008  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.859190  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:40.859341  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:40.859542  128374 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:40.859768  128374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0420 01:12:40.859794  128374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:12:41.150984  128374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:12:41.151047  128374 main.go:141] libmachine: Checking connection to Docker...
	I0420 01:12:41.151061  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetURL
	I0420 01:12:41.152435  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Using libvirt version 6000000
	I0420 01:12:41.154633  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.155009  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:41.155045  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.155180  128374 main.go:141] libmachine: Docker is up and running!
	I0420 01:12:41.155195  128374 main.go:141] libmachine: Reticulating splines...
	I0420 01:12:41.155203  128374 client.go:171] duration metric: took 30.128265281s to LocalClient.Create
	I0420 01:12:41.155228  128374 start.go:167] duration metric: took 30.128333266s to libmachine.API.Create "enable-default-cni-831611"
	I0420 01:12:41.155240  128374 start.go:293] postStartSetup for "enable-default-cni-831611" (driver="kvm2")
	I0420 01:12:41.155254  128374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:12:41.155276  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .DriverName
	I0420 01:12:41.155567  128374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:12:41.155594  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:41.157700  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.158062  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:41.158097  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.158200  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:41.158380  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:41.158537  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:41.158686  128374 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa Username:docker}
	I0420 01:12:41.240772  128374 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:12:41.246001  128374 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:12:41.246030  128374 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:12:41.246111  128374 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:12:41.246207  128374 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:12:41.246344  128374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:12:41.256760  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:12:41.289701  128374 start.go:296] duration metric: took 134.444303ms for postStartSetup
	I0420 01:12:41.289760  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetConfigRaw
	I0420 01:12:41.290479  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetIP
	I0420 01:12:41.293507  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.293998  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:41.294029  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.294299  128374 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/config.json ...
	I0420 01:12:41.294568  128374 start.go:128] duration metric: took 30.287682073s to createHost
	I0420 01:12:41.294601  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:41.297424  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.297782  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:41.297818  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.297978  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:41.298211  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:41.298378  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:41.298555  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:41.298734  128374 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:41.298958  128374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0420 01:12:41.298974  128374 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:12:41.406820  128374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713575561.391243922
	
	I0420 01:12:41.406846  128374 fix.go:216] guest clock: 1713575561.391243922
	I0420 01:12:41.406856  128374 fix.go:229] Guest: 2024-04-20 01:12:41.391243922 +0000 UTC Remote: 2024-04-20 01:12:41.294585744 +0000 UTC m=+42.980696085 (delta=96.658178ms)
	I0420 01:12:41.406887  128374 fix.go:200] guest clock delta is within tolerance: 96.658178ms
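
fix.go reads the guest clock over SSH with `date +%s.%N`, compares it to the host-side timestamp, and only resyncs when the difference exceeds a tolerance. A rough sketch using the two timestamps printed above; the one-second tolerance is an assumption for illustration, not necessarily minikube's threshold.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the log lines above.
        guest := time.Unix(1713575561, 391243922) // "date +%s.%N" result from the VM
        remote := time.Date(2024, 4, 20, 1, 12, 41, 294585744, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
        }
    }
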
	I0420 01:12:41.406901  128374 start.go:83] releasing machines lock for "enable-default-cni-831611", held for 30.400263435s
	I0420 01:12:41.406933  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .DriverName
	I0420 01:12:41.407241  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetIP
	I0420 01:12:41.410358  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.410779  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:41.410810  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.411106  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .DriverName
	I0420 01:12:41.411730  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .DriverName
	I0420 01:12:41.411917  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .DriverName
	I0420 01:12:41.412003  128374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:12:41.412047  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:41.412090  128374 ssh_runner.go:195] Run: cat /version.json
	I0420 01:12:41.412110  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:12:41.414937  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.415077  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.415355  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:41.415388  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.415460  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:41.415506  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:41.415640  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:41.415702  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:12:41.415861  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:41.415875  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:12:41.416007  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:41.416063  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:12:41.416160  128374 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa Username:docker}
	I0420 01:12:41.416222  128374 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa Username:docker}
	I0420 01:12:41.504406  128374 ssh_runner.go:195] Run: systemctl --version
	I0420 01:12:41.534648  128374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:12:41.719221  128374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:12:41.728324  128374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:12:41.728411  128374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:12:41.753035  128374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:12:41.753066  128374 start.go:494] detecting cgroup driver to use...
	I0420 01:12:41.753159  128374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:12:41.779613  128374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:12:41.798560  128374 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:12:41.798647  128374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:12:41.817217  128374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:12:41.835742  128374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:12:41.986713  128374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:12:42.183048  128374 docker.go:233] disabling docker service ...
	I0420 01:12:42.183125  128374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:12:42.206595  128374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:12:42.223658  128374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:12:42.390313  128374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:12:42.528604  128374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:12:42.545548  128374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:12:42.566281  128374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:12:42.566354  128374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:42.578397  128374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:12:42.578453  128374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:42.592105  128374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:42.605403  128374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:42.618211  128374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:12:42.632359  128374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:42.647633  128374 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:42.670087  128374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
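
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and open the unprivileged port range. A small Go sketch of the first two edits applied to an in-memory copy of the file (the input snippet is invented for illustration; the real edits happen over SSH on the guest):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `pause_image = "registry.k8s.io/pause:3.8"
    cgroup_manager = "systemd"
    `
        // Mirror the sed expressions from the log: replace whole lines that set
        // pause_image and cgroup_manager with the desired values.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }
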
	I0420 01:12:42.682895  128374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:12:42.696366  128374 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:12:42.696418  128374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:12:42.712931  128374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:12:42.724027  128374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:12:42.874653  128374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:12:43.030667  128374 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:12:43.030754  128374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:12:43.036402  128374 start.go:562] Will wait 60s for crictl version
	I0420 01:12:43.036448  128374 ssh_runner.go:195] Run: which crictl
	I0420 01:12:43.041755  128374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:12:43.086560  128374 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
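
Before asking crictl for its version, start.go waits up to 60s for the CRI-O socket to appear. A minimal polling sketch of that wait; the 500ms interval and the plain os.Stat check are assumptions, not necessarily what minikube does internally.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock"
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(sock); err == nil {
                fmt.Println("socket is ready:", sock)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
    }
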
	I0420 01:12:43.086652  128374 ssh_runner.go:195] Run: crio --version
	I0420 01:12:43.121422  128374 ssh_runner.go:195] Run: crio --version
	I0420 01:12:43.154870  128374 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:12:43.156240  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetIP
	I0420 01:12:43.159167  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:43.159588  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:12:43.159622  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:12:43.159906  128374 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 01:12:43.164591  128374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:12:43.179862  128374 kubeadm.go:877] updating cluster {Name:enable-default-cni-831611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:12:43.179999  128374 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:12:43.180068  128374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:12:43.238542  128374 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:12:43.238643  128374 ssh_runner.go:195] Run: which lz4
	I0420 01:12:43.245065  128374 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:12:43.250489  128374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:12:43.250525  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:12:41.409400  128503 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0420 01:12:41.409673  128503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:12:41.409727  128503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:12:41.426889  128503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0420 01:12:41.427422  128503 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:12:41.428015  128503 main.go:141] libmachine: Using API Version  1
	I0420 01:12:41.428044  128503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:12:41.428452  128503 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:12:41.428643  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetMachineName
	I0420 01:12:41.428804  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:12:41.428964  128503 start.go:159] libmachine.API.Create for "kindnet-831611" (driver="kvm2")
	I0420 01:12:41.428993  128503 client.go:168] LocalClient.Create starting
	I0420 01:12:41.429024  128503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 01:12:41.429060  128503 main.go:141] libmachine: Decoding PEM data...
	I0420 01:12:41.429077  128503 main.go:141] libmachine: Parsing certificate...
	I0420 01:12:41.429147  128503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 01:12:41.429174  128503 main.go:141] libmachine: Decoding PEM data...
	I0420 01:12:41.429196  128503 main.go:141] libmachine: Parsing certificate...
	I0420 01:12:41.429223  128503 main.go:141] libmachine: Running pre-create checks...
	I0420 01:12:41.429244  128503 main.go:141] libmachine: (kindnet-831611) Calling .PreCreateCheck
	I0420 01:12:41.429652  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetConfigRaw
	I0420 01:12:41.430123  128503 main.go:141] libmachine: Creating machine...
	I0420 01:12:41.430143  128503 main.go:141] libmachine: (kindnet-831611) Calling .Create
	I0420 01:12:41.430308  128503 main.go:141] libmachine: (kindnet-831611) Creating KVM machine...
	I0420 01:12:41.431476  128503 main.go:141] libmachine: (kindnet-831611) DBG | found existing default KVM network
	I0420 01:12:41.432500  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:41.432330  130200 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:5c:06} reservation:<nil>}
	I0420 01:12:41.433128  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:41.433035  130200 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b0:02:92} reservation:<nil>}
	I0420 01:12:41.434184  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:41.434094  130200 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000304cb0}
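
network.go scans candidate private subnets and skips any that an existing libvirt network already claims, which is why 192.168.39.0/24 and 192.168.50.0/24 are rejected before 192.168.61.0/24 is chosen. A toy sketch of that scan; the step of 11 between candidates is inferred from the 39 → 50 → 61 progression in the log and is not confirmed here.

    package main

    import "fmt"

    func main() {
        // Networks already in use, mirroring the "skipping subnet" lines above.
        taken := map[string]bool{
            "192.168.39.0/24": true,
            "192.168.50.0/24": true,
        }
        for third := 39; third <= 254; third += 11 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if taken[cidr] {
                fmt.Println("skipping subnet", cidr, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", cidr)
            return
        }
        fmt.Println("no free subnet found")
    }
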
	I0420 01:12:41.434212  128503 main.go:141] libmachine: (kindnet-831611) DBG | created network xml: 
	I0420 01:12:41.434223  128503 main.go:141] libmachine: (kindnet-831611) DBG | <network>
	I0420 01:12:41.434233  128503 main.go:141] libmachine: (kindnet-831611) DBG |   <name>mk-kindnet-831611</name>
	I0420 01:12:41.434244  128503 main.go:141] libmachine: (kindnet-831611) DBG |   <dns enable='no'/>
	I0420 01:12:41.434257  128503 main.go:141] libmachine: (kindnet-831611) DBG |   
	I0420 01:12:41.434268  128503 main.go:141] libmachine: (kindnet-831611) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0420 01:12:41.434281  128503 main.go:141] libmachine: (kindnet-831611) DBG |     <dhcp>
	I0420 01:12:41.434302  128503 main.go:141] libmachine: (kindnet-831611) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0420 01:12:41.434315  128503 main.go:141] libmachine: (kindnet-831611) DBG |     </dhcp>
	I0420 01:12:41.434324  128503 main.go:141] libmachine: (kindnet-831611) DBG |   </ip>
	I0420 01:12:41.434333  128503 main.go:141] libmachine: (kindnet-831611) DBG |   
	I0420 01:12:41.434342  128503 main.go:141] libmachine: (kindnet-831611) DBG | </network>
	I0420 01:12:41.434352  128503 main.go:141] libmachine: (kindnet-831611) DBG | 
	I0420 01:12:41.440560  128503 main.go:141] libmachine: (kindnet-831611) DBG | trying to create private KVM network mk-kindnet-831611 192.168.61.0/24...
	I0420 01:12:41.519461  128503 main.go:141] libmachine: (kindnet-831611) DBG | private KVM network mk-kindnet-831611 192.168.61.0/24 created
	I0420 01:12:41.519561  128503 main.go:141] libmachine: (kindnet-831611) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611 ...
	I0420 01:12:41.519730  128503 main.go:141] libmachine: (kindnet-831611) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 01:12:41.519788  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:41.519687  130200 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:12:41.519929  128503 main.go:141] libmachine: (kindnet-831611) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 01:12:41.760605  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:41.760462  130200 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa...
	I0420 01:12:41.955598  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:41.955433  130200 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/kindnet-831611.rawdisk...
	I0420 01:12:41.955661  128503 main.go:141] libmachine: (kindnet-831611) DBG | Writing magic tar header
	I0420 01:12:41.955678  128503 main.go:141] libmachine: (kindnet-831611) DBG | Writing SSH key tar header
	I0420 01:12:41.955691  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:41.955553  130200 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611 ...
	I0420 01:12:41.955719  128503 main.go:141] libmachine: (kindnet-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611
	I0420 01:12:41.955753  128503 main.go:141] libmachine: (kindnet-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 01:12:41.955767  128503 main.go:141] libmachine: (kindnet-831611) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611 (perms=drwx------)
	I0420 01:12:41.955781  128503 main.go:141] libmachine: (kindnet-831611) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 01:12:41.955799  128503 main.go:141] libmachine: (kindnet-831611) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 01:12:41.955812  128503 main.go:141] libmachine: (kindnet-831611) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 01:12:41.955826  128503 main.go:141] libmachine: (kindnet-831611) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 01:12:41.955837  128503 main.go:141] libmachine: (kindnet-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:12:41.955875  128503 main.go:141] libmachine: (kindnet-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 01:12:41.955901  128503 main.go:141] libmachine: (kindnet-831611) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 01:12:41.955910  128503 main.go:141] libmachine: (kindnet-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 01:12:41.955920  128503 main.go:141] libmachine: (kindnet-831611) DBG | Checking permissions on dir: /home/jenkins
	I0420 01:12:41.955928  128503 main.go:141] libmachine: (kindnet-831611) DBG | Checking permissions on dir: /home
	I0420 01:12:41.955939  128503 main.go:141] libmachine: (kindnet-831611) DBG | Skipping /home - not owner
	I0420 01:12:41.955951  128503 main.go:141] libmachine: (kindnet-831611) Creating domain...
	I0420 01:12:41.957176  128503 main.go:141] libmachine: (kindnet-831611) define libvirt domain using xml: 
	I0420 01:12:41.957201  128503 main.go:141] libmachine: (kindnet-831611) <domain type='kvm'>
	I0420 01:12:41.957211  128503 main.go:141] libmachine: (kindnet-831611)   <name>kindnet-831611</name>
	I0420 01:12:41.957223  128503 main.go:141] libmachine: (kindnet-831611)   <memory unit='MiB'>3072</memory>
	I0420 01:12:41.957232  128503 main.go:141] libmachine: (kindnet-831611)   <vcpu>2</vcpu>
	I0420 01:12:41.957243  128503 main.go:141] libmachine: (kindnet-831611)   <features>
	I0420 01:12:41.957252  128503 main.go:141] libmachine: (kindnet-831611)     <acpi/>
	I0420 01:12:41.957260  128503 main.go:141] libmachine: (kindnet-831611)     <apic/>
	I0420 01:12:41.957287  128503 main.go:141] libmachine: (kindnet-831611)     <pae/>
	I0420 01:12:41.957330  128503 main.go:141] libmachine: (kindnet-831611)     
	I0420 01:12:41.957366  128503 main.go:141] libmachine: (kindnet-831611)   </features>
	I0420 01:12:41.957388  128503 main.go:141] libmachine: (kindnet-831611)   <cpu mode='host-passthrough'>
	I0420 01:12:41.957403  128503 main.go:141] libmachine: (kindnet-831611)   
	I0420 01:12:41.957437  128503 main.go:141] libmachine: (kindnet-831611)   </cpu>
	I0420 01:12:41.957451  128503 main.go:141] libmachine: (kindnet-831611)   <os>
	I0420 01:12:41.957462  128503 main.go:141] libmachine: (kindnet-831611)     <type>hvm</type>
	I0420 01:12:41.957473  128503 main.go:141] libmachine: (kindnet-831611)     <boot dev='cdrom'/>
	I0420 01:12:41.957483  128503 main.go:141] libmachine: (kindnet-831611)     <boot dev='hd'/>
	I0420 01:12:41.957503  128503 main.go:141] libmachine: (kindnet-831611)     <bootmenu enable='no'/>
	I0420 01:12:41.957520  128503 main.go:141] libmachine: (kindnet-831611)   </os>
	I0420 01:12:41.957550  128503 main.go:141] libmachine: (kindnet-831611)   <devices>
	I0420 01:12:41.957574  128503 main.go:141] libmachine: (kindnet-831611)     <disk type='file' device='cdrom'>
	I0420 01:12:41.957600  128503 main.go:141] libmachine: (kindnet-831611)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/boot2docker.iso'/>
	I0420 01:12:41.957611  128503 main.go:141] libmachine: (kindnet-831611)       <target dev='hdc' bus='scsi'/>
	I0420 01:12:41.957625  128503 main.go:141] libmachine: (kindnet-831611)       <readonly/>
	I0420 01:12:41.957635  128503 main.go:141] libmachine: (kindnet-831611)     </disk>
	I0420 01:12:41.957643  128503 main.go:141] libmachine: (kindnet-831611)     <disk type='file' device='disk'>
	I0420 01:12:41.957655  128503 main.go:141] libmachine: (kindnet-831611)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 01:12:41.957670  128503 main.go:141] libmachine: (kindnet-831611)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/kindnet-831611.rawdisk'/>
	I0420 01:12:41.957690  128503 main.go:141] libmachine: (kindnet-831611)       <target dev='hda' bus='virtio'/>
	I0420 01:12:41.957703  128503 main.go:141] libmachine: (kindnet-831611)     </disk>
	I0420 01:12:41.957714  128503 main.go:141] libmachine: (kindnet-831611)     <interface type='network'>
	I0420 01:12:41.957727  128503 main.go:141] libmachine: (kindnet-831611)       <source network='mk-kindnet-831611'/>
	I0420 01:12:41.957738  128503 main.go:141] libmachine: (kindnet-831611)       <model type='virtio'/>
	I0420 01:12:41.957746  128503 main.go:141] libmachine: (kindnet-831611)     </interface>
	I0420 01:12:41.957757  128503 main.go:141] libmachine: (kindnet-831611)     <interface type='network'>
	I0420 01:12:41.957779  128503 main.go:141] libmachine: (kindnet-831611)       <source network='default'/>
	I0420 01:12:41.957791  128503 main.go:141] libmachine: (kindnet-831611)       <model type='virtio'/>
	I0420 01:12:41.957802  128503 main.go:141] libmachine: (kindnet-831611)     </interface>
	I0420 01:12:41.957809  128503 main.go:141] libmachine: (kindnet-831611)     <serial type='pty'>
	I0420 01:12:41.957818  128503 main.go:141] libmachine: (kindnet-831611)       <target port='0'/>
	I0420 01:12:41.957828  128503 main.go:141] libmachine: (kindnet-831611)     </serial>
	I0420 01:12:41.957836  128503 main.go:141] libmachine: (kindnet-831611)     <console type='pty'>
	I0420 01:12:41.957846  128503 main.go:141] libmachine: (kindnet-831611)       <target type='serial' port='0'/>
	I0420 01:12:41.957854  128503 main.go:141] libmachine: (kindnet-831611)     </console>
	I0420 01:12:41.957864  128503 main.go:141] libmachine: (kindnet-831611)     <rng model='virtio'>
	I0420 01:12:41.957875  128503 main.go:141] libmachine: (kindnet-831611)       <backend model='random'>/dev/random</backend>
	I0420 01:12:41.957885  128503 main.go:141] libmachine: (kindnet-831611)     </rng>
	I0420 01:12:41.957893  128503 main.go:141] libmachine: (kindnet-831611)     
	I0420 01:12:41.957902  128503 main.go:141] libmachine: (kindnet-831611)     
	I0420 01:12:41.957910  128503 main.go:141] libmachine: (kindnet-831611)   </devices>
	I0420 01:12:41.957919  128503 main.go:141] libmachine: (kindnet-831611) </domain>
	I0420 01:12:41.957929  128503 main.go:141] libmachine: (kindnet-831611) 
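
The domain definition above is plain libvirt XML assembled from the machine config. Below is a hypothetical text/template rendering of just the name, memory, and vCPU fields, to show the general shape; the real template also carries the disks, interfaces, serial console, and RNG devices listed in the log.

    package main

    import (
        "os"
        "text/template"
    )

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
    </domain>
    `

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        // Values copied from the log: 3072 MiB of memory and 2 vCPUs.
        t.Execute(os.Stdout, struct {
            Name      string
            MemoryMiB int
            CPUs      int
        }{"kindnet-831611", 3072, 2})
    }
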
	I0420 01:12:41.962078  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:f9:fe:69 in network default
	I0420 01:12:41.962750  128503 main.go:141] libmachine: (kindnet-831611) Ensuring networks are active...
	I0420 01:12:41.962772  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:41.963470  128503 main.go:141] libmachine: (kindnet-831611) Ensuring network default is active
	I0420 01:12:41.963894  128503 main.go:141] libmachine: (kindnet-831611) Ensuring network mk-kindnet-831611 is active
	I0420 01:12:41.964652  128503 main.go:141] libmachine: (kindnet-831611) Getting domain xml...
	I0420 01:12:41.965440  128503 main.go:141] libmachine: (kindnet-831611) Creating domain...
	I0420 01:12:43.291723  128503 main.go:141] libmachine: (kindnet-831611) Waiting to get IP...
	I0420 01:12:43.292372  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:43.292817  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:43.292840  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:43.292787  130200 retry.go:31] will retry after 259.947909ms: waiting for machine to come up
	I0420 01:12:43.554457  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:43.555089  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:43.555117  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:43.555052  130200 retry.go:31] will retry after 347.821822ms: waiting for machine to come up
	I0420 01:12:43.904614  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:43.905317  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:43.905346  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:43.905270  130200 retry.go:31] will retry after 458.443068ms: waiting for machine to come up
	I0420 01:12:44.365665  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:44.366171  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:44.366201  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:44.366114  130200 retry.go:31] will retry after 600.059559ms: waiting for machine to come up
	I0420 01:12:44.957770  128374 crio.go:462] duration metric: took 1.712731075s to copy over tarball
	I0420 01:12:44.957882  128374 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:12:47.650609  128374 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.692681406s)
	I0420 01:12:47.650648  128374 crio.go:469] duration metric: took 2.692841114s to extract the tarball
	I0420 01:12:47.650658  128374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:12:47.690556  128374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:12:47.741592  128374 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:12:47.741625  128374 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:12:47.741635  128374 kubeadm.go:928] updating node { 192.168.39.125 8443 v1.30.0 crio true true} ...
	I0420 01:12:47.741766  128374 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-831611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
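	The kubelet drop-in above uses the standard systemd override pattern: the first, empty ExecStart= line clears the ExecStart inherited from the base kubelet.service unit so the following ExecStart line can redefine it with the minikube-specific flags. A minimal way to confirm the merged result on the node (illustration only, not part of the captured run; the drop-in path matches the one written later in this log):
		systemctl cat kubelet.service                  # base unit plus the 10-kubeadm.conf drop-in
		systemctl show kubelet.service -p ExecStart    # effective ExecStart after the override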
	I0420 01:12:47.741867  128374 ssh_runner.go:195] Run: crio config
	I0420 01:12:47.807775  128374 cni.go:84] Creating CNI manager for "bridge"
	I0420 01:12:47.807815  128374 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:12:47.807847  128374 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-831611 NodeName:enable-default-cni-831611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:12:47.808025  128374 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-831611"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
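	The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is the kubeadm configuration that is written to /var/tmp/minikube/kubeadm.yaml.new a few steps below and later consumed by kubeadm init. As a hedged aside, such a file can be sanity-checked by hand with kubeadm's own validator; this invocation was not executed in the captured run:
		sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new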
	I0420 01:12:47.808107  128374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:12:47.821045  128374 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:12:47.821117  128374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:12:47.833410  128374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0420 01:12:47.853787  128374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:12:47.873480  128374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0420 01:12:47.893302  128374 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0420 01:12:47.898059  128374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:12:47.913527  128374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:12:48.040029  128374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:12:48.067557  128374 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611 for IP: 192.168.39.125
	I0420 01:12:48.067588  128374 certs.go:194] generating shared ca certs ...
	I0420 01:12:48.067612  128374 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:12:48.067814  128374 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:12:48.067879  128374 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:12:48.067894  128374 certs.go:256] generating profile certs ...
	I0420 01:12:48.067966  128374 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.key
	I0420 01:12:48.067987  128374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt with IP's: []
	I0420 01:12:48.255660  128374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt ...
	I0420 01:12:48.255699  128374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: {Name:mk696a7593622bb3401d3fa7dc4932887b57a2f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:12:48.255925  128374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.key ...
	I0420 01:12:48.255950  128374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.key: {Name:mk4dd68301a5525a8ce8e5d3cb6ae38052ed2350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:12:48.256085  128374 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.key.9296ab3e
	I0420 01:12:48.256114  128374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.crt.9296ab3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.125]
	I0420 01:12:48.349172  128374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.crt.9296ab3e ...
	I0420 01:12:48.349204  128374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.crt.9296ab3e: {Name:mk52ff86f7cb1f7c47141f6db5d4c2088664bba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:12:48.349389  128374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.key.9296ab3e ...
	I0420 01:12:48.349409  128374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.key.9296ab3e: {Name:mka49ba82c33c4713750772d3f097799124e58b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:12:48.349504  128374 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.crt.9296ab3e -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.crt
	I0420 01:12:48.349604  128374 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.key.9296ab3e -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.key
	I0420 01:12:48.349678  128374 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/proxy-client.key
	I0420 01:12:48.349701  128374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/proxy-client.crt with IP's: []
	I0420 01:12:48.573451  128374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/proxy-client.crt ...
	I0420 01:12:48.573487  128374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/proxy-client.crt: {Name:mkb274c30e6e4411393625cdd1f72074fc25a787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:12:48.573698  128374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/proxy-client.key ...
	I0420 01:12:48.573720  128374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/proxy-client.key: {Name:mk72504f47944f68bf9f9d8bbfbc8bce9d42e390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:12:48.573975  128374 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:12:48.574026  128374 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:12:48.574050  128374 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:12:48.574106  128374 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:12:48.574146  128374 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:12:48.574181  128374 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:12:48.574250  128374 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:12:48.574952  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:12:48.606361  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:12:48.635228  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:12:48.667605  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:12:48.699113  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0420 01:12:48.728477  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:12:48.760604  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:12:48.793052  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:12:48.819524  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:12:48.849111  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:12:48.877152  128374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:12:48.907336  128374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:12:48.927383  128374 ssh_runner.go:195] Run: openssl version
	I0420 01:12:48.933393  128374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:12:48.945013  128374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:12:48.950160  128374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:12:48.950215  128374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:12:48.956251  128374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:12:48.967485  128374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:12:48.978951  128374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:12:48.983977  128374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:12:48.984037  128374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:12:48.990247  128374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:12:49.005816  128374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:12:49.018862  128374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:12:49.025139  128374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:12:49.025207  128374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:12:49.033384  128374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
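	The hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: openssl x509 -hash prints the value used as the link name in /etc/ssl/certs, which is how the library indexes the trust store. A small illustration of the same pattern (not part of the captured run):
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # subject hash of the CA
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"  # hash-named trust-store link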
	I0420 01:12:49.045613  128374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:12:49.050581  128374 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 01:12:49.050633  128374 kubeadm.go:391] StartCluster: {Name:enable-default-cni-831611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:12:49.050714  128374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:12:49.050750  128374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:12:49.097128  128374 cri.go:89] found id: ""
	I0420 01:12:49.097212  128374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 01:12:49.108457  128374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:12:49.119184  128374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:12:49.129725  128374 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:12:49.129744  128374 kubeadm.go:156] found existing configuration files:
	
	I0420 01:12:49.129786  128374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:12:49.139504  128374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:12:49.139557  128374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:12:49.149563  128374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:12:49.162373  128374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:12:49.162492  128374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:12:49.176593  128374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:12:49.186694  128374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:12:49.186753  128374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:12:49.197564  128374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:12:49.208182  128374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:12:49.208244  128374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:12:49.218914  128374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:12:49.274059  128374 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:12:49.274122  128374 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:12:49.400135  128374 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:12:49.400326  128374 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:12:49.400470  128374 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:12:49.654618  128374 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:12:44.967716  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:44.990618  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:44.990653  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:44.968064  130200 retry.go:31] will retry after 739.196015ms: waiting for machine to come up
	I0420 01:12:45.709139  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:45.709899  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:45.709939  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:45.709834  130200 retry.go:31] will retry after 733.461273ms: waiting for machine to come up
	I0420 01:12:46.445188  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:46.445603  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:46.445640  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:46.445546  130200 retry.go:31] will retry after 1.162438938s: waiting for machine to come up
	I0420 01:12:47.609420  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:47.609979  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:47.610008  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:47.609897  130200 retry.go:31] will retry after 1.359062531s: waiting for machine to come up
	I0420 01:12:48.971158  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:48.971663  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:48.971693  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:48.971620  130200 retry.go:31] will retry after 1.555779486s: waiting for machine to come up
	I0420 01:12:49.680038  128374 out.go:204]   - Generating certificates and keys ...
	I0420 01:12:49.680160  128374 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:12:49.680252  128374 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:12:49.728539  128374 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 01:12:49.832274  128374 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 01:12:49.933119  128374 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 01:12:50.039191  128374 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 01:12:50.163468  128374 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 01:12:50.163811  128374 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-831611 localhost] and IPs [192.168.39.125 127.0.0.1 ::1]
	I0420 01:12:50.655951  128374 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 01:12:50.656380  128374 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-831611 localhost] and IPs [192.168.39.125 127.0.0.1 ::1]
	I0420 01:12:50.861587  128374 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 01:12:50.928958  128374 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 01:12:51.135383  128374 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 01:12:51.135841  128374 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:12:51.294337  128374 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:12:51.475154  128374 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:12:51.538269  128374 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:12:51.661535  128374 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:12:51.760176  128374 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:12:51.760792  128374 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:12:51.763959  128374 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:12:51.766191  128374 out.go:204]   - Booting up control plane ...
	I0420 01:12:51.766314  128374 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:12:51.766549  128374 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:12:51.767436  128374 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:12:51.783949  128374 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:12:51.784885  128374 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:12:51.784994  128374 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:12:51.924808  128374 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:12:51.924935  128374 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:12:52.425469  128374 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 500.871942ms
	I0420 01:12:52.425608  128374 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:12:50.528812  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:50.529374  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:50.529406  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:50.529303  130200 retry.go:31] will retry after 1.49853074s: waiting for machine to come up
	I0420 01:12:52.029399  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:52.029940  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:52.029968  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:52.029892  130200 retry.go:31] will retry after 2.637332865s: waiting for machine to come up
	I0420 01:12:54.670354  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:54.670765  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:54.670793  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:54.670713  130200 retry.go:31] will retry after 2.826286493s: waiting for machine to come up
	I0420 01:12:57.928657  128374 kubeadm.go:309] [api-check] The API server is healthy after 5.502811377s
	I0420 01:12:57.945119  128374 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:12:57.967440  128374 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:12:58.002247  128374 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:12:58.002474  128374 kubeadm.go:309] [mark-control-plane] Marking the node enable-default-cni-831611 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:12:58.015241  128374 kubeadm.go:309] [bootstrap-token] Using token: vdqi9x.vzzfi3lwb5bpbna7
	I0420 01:12:58.016620  128374 out.go:204]   - Configuring RBAC rules ...
	I0420 01:12:58.016739  128374 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:12:58.027404  128374 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:12:58.034760  128374 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:12:58.042085  128374 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:12:58.045160  128374 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:12:58.048089  128374 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:12:58.334144  128374 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:12:58.801860  128374 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:12:59.333805  128374 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:12:59.334622  128374 kubeadm.go:309] 
	I0420 01:12:59.334736  128374 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:12:59.334759  128374 kubeadm.go:309] 
	I0420 01:12:59.334884  128374 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:12:59.334897  128374 kubeadm.go:309] 
	I0420 01:12:59.334932  128374 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:12:59.335030  128374 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:12:59.335134  128374 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:12:59.335153  128374 kubeadm.go:309] 
	I0420 01:12:59.335211  128374 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:12:59.335224  128374 kubeadm.go:309] 
	I0420 01:12:59.335266  128374 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:12:59.335273  128374 kubeadm.go:309] 
	I0420 01:12:59.335315  128374 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:12:59.335414  128374 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:12:59.335516  128374 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:12:59.335524  128374 kubeadm.go:309] 
	I0420 01:12:59.335592  128374 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:12:59.335690  128374 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:12:59.335702  128374 kubeadm.go:309] 
	I0420 01:12:59.335814  128374 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vdqi9x.vzzfi3lwb5bpbna7 \
	I0420 01:12:59.335929  128374 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:12:59.335962  128374 kubeadm.go:309] 	--control-plane 
	I0420 01:12:59.335970  128374 kubeadm.go:309] 
	I0420 01:12:59.336061  128374 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:12:59.336083  128374 kubeadm.go:309] 
	I0420 01:12:59.336197  128374 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vdqi9x.vzzfi3lwb5bpbna7 \
	I0420 01:12:59.336297  128374 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:12:59.336849  128374 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:12:59.336946  128374 cni.go:84] Creating CNI manager for "bridge"
	I0420 01:12:59.338722  128374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:12:57.498890  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:12:57.499303  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:12:57.499337  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:12:57.499236  130200 retry.go:31] will retry after 3.78307259s: waiting for machine to come up
	I0420 01:12:59.339925  128374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:12:59.356247  128374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:12:59.381325  128374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:12:59.381399  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:12:59.381428  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-831611 minikube.k8s.io/updated_at=2024_04_20T01_12_59_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=enable-default-cni-831611 minikube.k8s.io/primary=true
	I0420 01:12:59.500994  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:12:59.585944  128374 ops.go:34] apiserver oom_adj: -16
	I0420 01:13:00.001739  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:00.501252  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:01.001954  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:01.500943  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:02.001018  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:02.501015  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:03.001665  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:01.287129  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:01.287641  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find current IP address of domain kindnet-831611 in network mk-kindnet-831611
	I0420 01:13:01.287694  128503 main.go:141] libmachine: (kindnet-831611) DBG | I0420 01:13:01.287593  130200 retry.go:31] will retry after 4.184036234s: waiting for machine to come up
	I0420 01:13:03.501331  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:04.001108  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:04.501029  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:05.001701  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:05.501052  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:06.001265  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:06.501558  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:07.001305  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:07.501899  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:08.001569  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:05.474376  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:05.474784  128503 main.go:141] libmachine: (kindnet-831611) Found IP for machine: 192.168.61.217
	I0420 01:13:05.474813  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has current primary IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:05.474820  128503 main.go:141] libmachine: (kindnet-831611) Reserving static IP address...
	I0420 01:13:05.475099  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find host DHCP lease matching {name: "kindnet-831611", mac: "52:54:00:76:2a:a5", ip: "192.168.61.217"} in network mk-kindnet-831611
	I0420 01:13:05.554120  128503 main.go:141] libmachine: (kindnet-831611) DBG | Getting to WaitForSSH function...
	I0420 01:13:05.554152  128503 main.go:141] libmachine: (kindnet-831611) Reserved static IP address: 192.168.61.217
	I0420 01:13:05.554183  128503 main.go:141] libmachine: (kindnet-831611) Waiting for SSH to be available...
	I0420 01:13:05.557147  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:05.557526  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611
	I0420 01:13:05.557553  128503 main.go:141] libmachine: (kindnet-831611) DBG | unable to find defined IP address of network mk-kindnet-831611 interface with MAC address 52:54:00:76:2a:a5
	I0420 01:13:05.557715  128503 main.go:141] libmachine: (kindnet-831611) DBG | Using SSH client type: external
	I0420 01:13:05.557755  128503 main.go:141] libmachine: (kindnet-831611) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa (-rw-------)
	I0420 01:13:05.557781  128503 main.go:141] libmachine: (kindnet-831611) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:13:05.557796  128503 main.go:141] libmachine: (kindnet-831611) DBG | About to run SSH command:
	I0420 01:13:05.557812  128503 main.go:141] libmachine: (kindnet-831611) DBG | exit 0
	I0420 01:13:05.561345  128503 main.go:141] libmachine: (kindnet-831611) DBG | SSH cmd err, output: exit status 255: 
	I0420 01:13:05.561383  128503 main.go:141] libmachine: (kindnet-831611) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0420 01:13:05.561417  128503 main.go:141] libmachine: (kindnet-831611) DBG | command : exit 0
	I0420 01:13:05.561437  128503 main.go:141] libmachine: (kindnet-831611) DBG | err     : exit status 255
	I0420 01:13:05.561453  128503 main.go:141] libmachine: (kindnet-831611) DBG | output  : 
	I0420 01:13:08.561497  128503 main.go:141] libmachine: (kindnet-831611) DBG | Getting to WaitForSSH function...
	I0420 01:13:08.564347  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:08.564825  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:08.564858  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:08.565056  128503 main.go:141] libmachine: (kindnet-831611) DBG | Using SSH client type: external
	I0420 01:13:08.565088  128503 main.go:141] libmachine: (kindnet-831611) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa (-rw-------)
	I0420 01:13:08.565128  128503 main.go:141] libmachine: (kindnet-831611) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:13:08.565144  128503 main.go:141] libmachine: (kindnet-831611) DBG | About to run SSH command:
	I0420 01:13:08.565156  128503 main.go:141] libmachine: (kindnet-831611) DBG | exit 0
	I0420 01:13:08.697592  128503 main.go:141] libmachine: (kindnet-831611) DBG | SSH cmd err, output: <nil>: 
	I0420 01:13:08.697836  128503 main.go:141] libmachine: (kindnet-831611) KVM machine creation complete!
	I0420 01:13:08.698128  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetConfigRaw
	I0420 01:13:08.698783  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:13:08.698977  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:13:08.699131  128503 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 01:13:08.699147  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetState
	I0420 01:13:08.700452  128503 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 01:13:08.700469  128503 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 01:13:08.700477  128503 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 01:13:08.700484  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:08.702854  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:08.703224  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:08.703260  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:08.703422  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:08.703596  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:08.703718  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:08.703842  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:08.704033  128503 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:08.704235  128503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I0420 01:13:08.704247  128503 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 01:13:08.816854  128503 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:13:08.816881  128503 main.go:141] libmachine: Detecting the provisioner...
	I0420 01:13:08.816912  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:08.819700  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:08.820042  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:08.820073  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:08.820246  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:08.820482  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:08.820635  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:08.820766  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:08.820908  128503 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:08.821115  128503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I0420 01:13:08.821137  128503 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 01:13:08.934857  128503 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 01:13:08.934942  128503 main.go:141] libmachine: found compatible host: buildroot
	I0420 01:13:08.934957  128503 main.go:141] libmachine: Provisioning with buildroot...
	I0420 01:13:08.934971  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetMachineName
	I0420 01:13:08.935291  128503 buildroot.go:166] provisioning hostname "kindnet-831611"
	I0420 01:13:08.935326  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetMachineName
	I0420 01:13:08.935552  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:08.938494  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:08.938914  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:08.938948  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:08.939135  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:08.939348  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:08.939503  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:08.939643  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:08.939875  128503 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:08.940057  128503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I0420 01:13:08.940071  128503 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-831611 && echo "kindnet-831611" | sudo tee /etc/hostname
	I0420 01:13:09.079206  128503 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-831611
	
	I0420 01:13:09.079243  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:09.082850  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.083521  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:09.083551  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.083816  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:09.084021  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:09.084208  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:09.084360  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:09.084626  128503 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:09.084836  128503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I0420 01:13:09.084855  128503 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-831611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-831611/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-831611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:13:09.212455  128503 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:13:09.212491  128503 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:13:09.212531  128503 buildroot.go:174] setting up certificates
	I0420 01:13:09.212541  128503 provision.go:84] configureAuth start
	I0420 01:13:09.212556  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetMachineName
	I0420 01:13:09.212881  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetIP
	I0420 01:13:09.215670  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.216060  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:09.216088  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.216228  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:09.218690  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.219031  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:09.219059  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.219172  128503 provision.go:143] copyHostCerts
	I0420 01:13:09.219237  128503 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:13:09.219250  128503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:13:09.219325  128503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:13:09.219465  128503 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:13:09.219480  128503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:13:09.219523  128503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:13:09.219626  128503 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:13:09.219638  128503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:13:09.219669  128503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:13:09.219761  128503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.kindnet-831611 san=[127.0.0.1 192.168.61.217 kindnet-831611 localhost minikube]
	I0420 01:13:09.332968  128503 provision.go:177] copyRemoteCerts
	I0420 01:13:09.333024  128503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:13:09.333049  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:09.335926  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.336298  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:09.336337  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.336481  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:09.336695  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:09.336886  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:09.337020  128503 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa Username:docker}
	I0420 01:13:09.425305  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:13:09.454536  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0420 01:13:09.482730  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:13:09.511245  128503 provision.go:87] duration metric: took 298.687044ms to configureAuth
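	For reference, the server certificate generated and copied above embeds the SANs listed in the provision step (127.0.0.1, 192.168.61.217, kindnet-831611, localhost, minikube). A hypothetical way to confirm them from inside the guest, not something this test run executes, would be:
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	    # expected to list: 127.0.0.1, 192.168.61.217, kindnet-831611, localhost, minikube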
	I0420 01:13:09.511274  128503 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:13:09.511477  128503 config.go:182] Loaded profile config "kindnet-831611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:13:09.511569  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:09.514495  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.514957  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:09.514987  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.515213  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:09.515428  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:09.515580  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:09.515781  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:09.515997  128503 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:09.516212  128503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I0420 01:13:09.516229  128503 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:13:09.828905  128503 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:13:09.828934  128503 main.go:141] libmachine: Checking connection to Docker...
	I0420 01:13:09.828944  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetURL
	I0420 01:13:09.830224  128503 main.go:141] libmachine: (kindnet-831611) DBG | Using libvirt version 6000000
	I0420 01:13:09.832615  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.832946  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:09.832974  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.833159  128503 main.go:141] libmachine: Docker is up and running!
	I0420 01:13:09.833173  128503 main.go:141] libmachine: Reticulating splines...
	I0420 01:13:09.833181  128503 client.go:171] duration metric: took 28.40417787s to LocalClient.Create
	I0420 01:13:09.833209  128503 start.go:167] duration metric: took 28.404246885s to libmachine.API.Create "kindnet-831611"
	I0420 01:13:09.833221  128503 start.go:293] postStartSetup for "kindnet-831611" (driver="kvm2")
	I0420 01:13:09.833239  128503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:13:09.833259  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:13:09.833522  128503 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:13:09.833546  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:09.836029  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.836400  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:09.836450  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.836553  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:09.836725  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:09.836916  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:09.837056  128503 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa Username:docker}
	I0420 01:13:10.098770  129976 start.go:364] duration metric: took 45.936314197s to acquireMachinesLock for "flannel-831611"
	I0420 01:13:10.098871  129976 start.go:93] Provisioning new machine with config: &{Name:flannel-831611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:13:10.099028  129976 start.go:125] createHost starting for "" (driver="kvm2")
	I0420 01:13:08.501221  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:09.001677  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:09.501365  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:10.001533  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:10.501006  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:11.001462  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:11.501002  128374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:11.611386  128374 kubeadm.go:1107] duration metric: took 12.23003283s to wait for elevateKubeSystemPrivileges
	W0420 01:13:11.611435  128374 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:13:11.611448  128374 kubeadm.go:393] duration metric: took 22.560816841s to StartCluster
	I0420 01:13:11.611470  128374 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:11.611554  128374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:13:11.613205  128374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:11.613620  128374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0420 01:13:11.613807  128374 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:13:11.613884  128374 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-831611"
	I0420 01:13:11.613907  128374 config.go:182] Loaded profile config "enable-default-cni-831611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:13:11.613918  128374 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-831611"
	I0420 01:13:11.613958  128374 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-831611"
	I0420 01:13:11.613964  128374 host.go:66] Checking if "enable-default-cni-831611" exists ...
	I0420 01:13:11.613993  128374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-831611"
	I0420 01:13:11.614469  128374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:11.614475  128374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:11.614512  128374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:11.614517  128374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:11.613636  128374 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:13:11.617498  128374 out.go:177] * Verifying Kubernetes components...
	I0420 01:13:11.619407  128374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:13:11.634650  128374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43619
	I0420 01:13:11.635325  128374 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:11.635493  128374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0420 01:13:11.635919  128374 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:11.636530  128374 main.go:141] libmachine: Using API Version  1
	I0420 01:13:11.636560  128374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:11.636965  128374 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:11.637142  128374 main.go:141] libmachine: Using API Version  1
	I0420 01:13:11.637157  128374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:11.637231  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetState
	I0420 01:13:11.637834  128374 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:11.638565  128374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:11.638609  128374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:11.643435  128374 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-831611"
	I0420 01:13:11.643483  128374 host.go:66] Checking if "enable-default-cni-831611" exists ...
	I0420 01:13:11.643913  128374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:11.643948  128374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:11.661082  128374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I0420 01:13:11.661845  128374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37885
	I0420 01:13:11.662059  128374 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:11.662719  128374 main.go:141] libmachine: Using API Version  1
	I0420 01:13:11.662746  128374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:11.662832  128374 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:11.663077  128374 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:11.663212  128374 main.go:141] libmachine: Using API Version  1
	I0420 01:13:11.663225  128374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:11.663823  128374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:11.663859  128374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:11.664015  128374 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:11.664201  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetState
	I0420 01:13:11.666552  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .DriverName
	I0420 01:13:11.671396  128374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:13:09.925797  128503 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:13:09.930708  128503 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:13:09.930733  128503 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:13:09.930802  128503 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:13:09.930892  128503 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:13:09.931010  128503 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:13:09.941880  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:13:09.970078  128503 start.go:296] duration metric: took 136.835313ms for postStartSetup
	I0420 01:13:09.970134  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetConfigRaw
	I0420 01:13:09.970684  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetIP
	I0420 01:13:09.973158  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.973554  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:09.973578  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.973870  128503 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/config.json ...
	I0420 01:13:09.974095  128503 start.go:128] duration metric: took 28.56685206s to createHost
	I0420 01:13:09.974124  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:09.976652  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.976976  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:09.977007  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:09.977189  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:09.977374  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:09.977561  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:09.977755  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:09.977946  128503 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:09.978148  128503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I0420 01:13:09.978164  128503 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:13:10.098600  128503 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713575590.084234803
	
	I0420 01:13:10.098625  128503 fix.go:216] guest clock: 1713575590.084234803
	I0420 01:13:10.098635  128503 fix.go:229] Guest: 2024-04-20 01:13:10.084234803 +0000 UTC Remote: 2024-04-20 01:13:09.974108355 +0000 UTC m=+70.113269336 (delta=110.126448ms)
	I0420 01:13:10.098680  128503 fix.go:200] guest clock delta is within tolerance: 110.126448ms
	I0420 01:13:10.098691  128503 start.go:83] releasing machines lock for "kindnet-831611", held for 28.691660726s
	I0420 01:13:10.098725  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:13:10.099006  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetIP
	I0420 01:13:10.102009  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:10.102375  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:10.102404  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:10.102567  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:13:10.103078  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:13:10.103294  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:13:10.103381  128503 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:13:10.103419  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:10.103526  128503 ssh_runner.go:195] Run: cat /version.json
	I0420 01:13:10.103553  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:10.106329  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:10.106522  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:10.106788  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:10.106811  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:10.106972  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:10.107122  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:10.107133  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:10.107145  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:10.107301  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:10.107398  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:10.107481  128503 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa Username:docker}
	I0420 01:13:10.107797  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:10.108032  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:10.108201  128503 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa Username:docker}
	I0420 01:13:10.218762  128503 ssh_runner.go:195] Run: systemctl --version
	I0420 01:13:10.227035  128503 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:13:10.395053  128503 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:13:10.403137  128503 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:13:10.403213  128503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:13:10.424372  128503 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:13:10.424398  128503 start.go:494] detecting cgroup driver to use...
	I0420 01:13:10.424465  128503 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:13:10.444973  128503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:13:10.461982  128503 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:13:10.462041  128503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:13:10.480366  128503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:13:10.499118  128503 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:13:10.640415  128503 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:13:10.831010  128503 docker.go:233] disabling docker service ...
	I0420 01:13:10.831081  128503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:13:10.850166  128503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:13:10.865478  128503 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:13:11.008276  128503 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:13:11.126823  128503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:13:11.146676  128503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:13:11.171602  128503 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:13:11.171673  128503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:11.184131  128503 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:13:11.184211  128503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:11.196305  128503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:11.207611  128503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:11.221169  128503 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:13:11.235862  128503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:11.247706  128503 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:11.273215  128503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:11.289672  128503 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:13:11.300219  128503 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:13:11.300280  128503 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:13:11.316182  128503 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:13:11.327726  128503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:13:11.465891  128503 ssh_runner.go:195] Run: sudo systemctl restart crio
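	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings by the time crio is restarted (a sketch reconstructed from the commands in this log, not a dump of the actual file):
	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]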
	I0420 01:13:11.633826  128503 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:13:11.633919  128503 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:13:11.654315  128503 start.go:562] Will wait 60s for crictl version
	I0420 01:13:11.654467  128503 ssh_runner.go:195] Run: which crictl
	I0420 01:13:11.661140  128503 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:13:11.716965  128503 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:13:11.717062  128503 ssh_runner.go:195] Run: crio --version
	I0420 01:13:11.763865  128503 ssh_runner.go:195] Run: crio --version
	I0420 01:13:11.815265  128503 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:13:11.672969  128374 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:13:11.672984  128374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:13:11.672999  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:13:11.676405  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:13:11.676848  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:13:11.676879  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:13:11.677031  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:13:11.677511  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:13:11.677694  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:13:11.677844  128374 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa Username:docker}
	I0420 01:13:11.685252  128374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0420 01:13:11.685820  128374 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:11.686683  128374 main.go:141] libmachine: Using API Version  1
	I0420 01:13:11.686707  128374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:11.687691  128374 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:11.688163  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetState
	I0420 01:13:11.690152  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .DriverName
	I0420 01:13:11.690450  128374 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:13:11.690463  128374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:13:11.690481  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHHostname
	I0420 01:13:11.693066  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:13:11.693647  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:8b:de", ip: ""} in network mk-enable-default-cni-831611: {Iface:virbr1 ExpiryTime:2024-04-20 02:12:29 +0000 UTC Type:0 Mac:52:54:00:f2:8b:de Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:enable-default-cni-831611 Clientid:01:52:54:00:f2:8b:de}
	I0420 01:13:11.693676  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | domain enable-default-cni-831611 has defined IP address 192.168.39.125 and MAC address 52:54:00:f2:8b:de in network mk-enable-default-cni-831611
	I0420 01:13:11.693820  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHPort
	I0420 01:13:11.693966  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHKeyPath
	I0420 01:13:11.694092  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .GetSSHUsername
	I0420 01:13:11.694203  128374 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/enable-default-cni-831611/id_rsa Username:docker}
	I0420 01:13:11.905417  128374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0420 01:13:11.993064  128374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:13:12.428542  128374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:13:12.468362  128374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:13:13.088066  128374 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.182581099s)
	I0420 01:13:13.088104  128374 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.095001165s)
	I0420 01:13:13.088180  128374 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:13.088212  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .Close
	I0420 01:13:13.088106  128374 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
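	The sed pipeline above edits the coredns ConfigMap in place: it inserts a log directive before errors and a hosts block ahead of the forward directive. Reconstructed from that command, the relevant part of the resulting Corefile should look roughly like this (surrounding directives elided):
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf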
	I0420 01:13:13.088599  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Closing plugin on server side
	I0420 01:13:13.088640  128374 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:13.088647  128374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:13.088655  128374 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:13.088662  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .Close
	I0420 01:13:13.090086  128374 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-831611" to be "Ready" ...
	I0420 01:13:13.093435  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Closing plugin on server side
	I0420 01:13:13.093559  128374 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:13.093621  128374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:13.115587  128374 node_ready.go:49] node "enable-default-cni-831611" has status "Ready":"True"
	I0420 01:13:13.115618  128374 node_ready.go:38] duration metric: took 25.481909ms for node "enable-default-cni-831611" to be "Ready" ...
	I0420 01:13:13.115631  128374 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:13:13.133187  128374 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:13.133222  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .Close
	I0420 01:13:13.133577  128374 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:13.133611  128374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:13.144932  128374 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-l8ghq" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:13.599647  128374 kapi.go:248] "coredns" deployment in "kube-system" namespace and "enable-default-cni-831611" context rescaled to 1 replicas
	I0420 01:13:14.066008  128374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.597602508s)
	I0420 01:13:14.066075  128374 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:14.066088  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .Close
	I0420 01:13:14.066403  128374 main.go:141] libmachine: (enable-default-cni-831611) DBG | Closing plugin on server side
	I0420 01:13:14.066448  128374 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:14.066456  128374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:14.066464  128374 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:14.066472  128374 main.go:141] libmachine: (enable-default-cni-831611) Calling .Close
	I0420 01:13:14.066686  128374 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:14.066718  128374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:14.068678  128374 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0420 01:13:10.101124  129976 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0420 01:13:10.101295  129976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:10.101359  129976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:10.118354  129976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0420 01:13:10.119002  129976 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:10.119663  129976 main.go:141] libmachine: Using API Version  1
	I0420 01:13:10.119698  129976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:10.120116  129976 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:10.120314  129976 main.go:141] libmachine: (flannel-831611) Calling .GetMachineName
	I0420 01:13:10.120473  129976 main.go:141] libmachine: (flannel-831611) Calling .DriverName
	I0420 01:13:10.120676  129976 start.go:159] libmachine.API.Create for "flannel-831611" (driver="kvm2")
	I0420 01:13:10.120719  129976 client.go:168] LocalClient.Create starting
	I0420 01:13:10.120756  129976 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 01:13:10.120800  129976 main.go:141] libmachine: Decoding PEM data...
	I0420 01:13:10.120825  129976 main.go:141] libmachine: Parsing certificate...
	I0420 01:13:10.120900  129976 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 01:13:10.120927  129976 main.go:141] libmachine: Decoding PEM data...
	I0420 01:13:10.120946  129976 main.go:141] libmachine: Parsing certificate...
	I0420 01:13:10.120969  129976 main.go:141] libmachine: Running pre-create checks...
	I0420 01:13:10.120987  129976 main.go:141] libmachine: (flannel-831611) Calling .PreCreateCheck
	I0420 01:13:10.121420  129976 main.go:141] libmachine: (flannel-831611) Calling .GetConfigRaw
	I0420 01:13:10.121863  129976 main.go:141] libmachine: Creating machine...
	I0420 01:13:10.121881  129976 main.go:141] libmachine: (flannel-831611) Calling .Create
	I0420 01:13:10.122009  129976 main.go:141] libmachine: (flannel-831611) Creating KVM machine...
	I0420 01:13:10.123318  129976 main.go:141] libmachine: (flannel-831611) DBG | found existing default KVM network
	I0420 01:13:10.124934  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:10.124750  130497 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:5c:06} reservation:<nil>}
	I0420 01:13:10.125829  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:10.125727  130497 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b0:02:92} reservation:<nil>}
	I0420 01:13:10.126932  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:10.126831  130497 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:12:37} reservation:<nil>}
	I0420 01:13:10.128237  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:10.128155  130497 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003071f0}
	I0420 01:13:10.128265  129976 main.go:141] libmachine: (flannel-831611) DBG | created network xml: 
	I0420 01:13:10.128278  129976 main.go:141] libmachine: (flannel-831611) DBG | <network>
	I0420 01:13:10.128287  129976 main.go:141] libmachine: (flannel-831611) DBG |   <name>mk-flannel-831611</name>
	I0420 01:13:10.128296  129976 main.go:141] libmachine: (flannel-831611) DBG |   <dns enable='no'/>
	I0420 01:13:10.128307  129976 main.go:141] libmachine: (flannel-831611) DBG |   
	I0420 01:13:10.128317  129976 main.go:141] libmachine: (flannel-831611) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0420 01:13:10.128335  129976 main.go:141] libmachine: (flannel-831611) DBG |     <dhcp>
	I0420 01:13:10.128346  129976 main.go:141] libmachine: (flannel-831611) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0420 01:13:10.128355  129976 main.go:141] libmachine: (flannel-831611) DBG |     </dhcp>
	I0420 01:13:10.128365  129976 main.go:141] libmachine: (flannel-831611) DBG |   </ip>
	I0420 01:13:10.128372  129976 main.go:141] libmachine: (flannel-831611) DBG |   
	I0420 01:13:10.128381  129976 main.go:141] libmachine: (flannel-831611) DBG | </network>
	I0420 01:13:10.128392  129976 main.go:141] libmachine: (flannel-831611) DBG | 
	I0420 01:13:10.133974  129976 main.go:141] libmachine: (flannel-831611) DBG | trying to create private KVM network mk-flannel-831611 192.168.72.0/24...
	I0420 01:13:10.205868  129976 main.go:141] libmachine: (flannel-831611) DBG | private KVM network mk-flannel-831611 192.168.72.0/24 created
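	Once the private network exists, it can be inspected with virsh against the same qemu:///system URI used by the driver; these commands are illustrative only and are not run by the test:
	    virsh --connect qemu:///system net-list --all
	    virsh --connect qemu:///system net-dumpxml mk-flannel-831611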
	I0420 01:13:10.205927  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:10.205842  130497 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:13:10.205953  129976 main.go:141] libmachine: (flannel-831611) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611 ...
	I0420 01:13:10.205969  129976 main.go:141] libmachine: (flannel-831611) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 01:13:10.206060  129976 main.go:141] libmachine: (flannel-831611) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 01:13:10.449137  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:10.449007  130497 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/id_rsa...
	I0420 01:13:10.653514  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:10.653397  130497 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/flannel-831611.rawdisk...
	I0420 01:13:10.653560  129976 main.go:141] libmachine: (flannel-831611) DBG | Writing magic tar header
	I0420 01:13:10.653606  129976 main.go:141] libmachine: (flannel-831611) DBG | Writing SSH key tar header
	I0420 01:13:10.653632  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:10.653584  130497 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611 ...
	I0420 01:13:10.653791  129976 main.go:141] libmachine: (flannel-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611
	I0420 01:13:10.653825  129976 main.go:141] libmachine: (flannel-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 01:13:10.653839  129976 main.go:141] libmachine: (flannel-831611) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611 (perms=drwx------)
	I0420 01:13:10.653859  129976 main.go:141] libmachine: (flannel-831611) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 01:13:10.653879  129976 main.go:141] libmachine: (flannel-831611) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 01:13:10.653896  129976 main.go:141] libmachine: (flannel-831611) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 01:13:10.653921  129976 main.go:141] libmachine: (flannel-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:13:10.653935  129976 main.go:141] libmachine: (flannel-831611) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 01:13:10.653946  129976 main.go:141] libmachine: (flannel-831611) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 01:13:10.653953  129976 main.go:141] libmachine: (flannel-831611) Creating domain...
	I0420 01:13:10.653968  129976 main.go:141] libmachine: (flannel-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 01:13:10.653977  129976 main.go:141] libmachine: (flannel-831611) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 01:13:10.654012  129976 main.go:141] libmachine: (flannel-831611) DBG | Checking permissions on dir: /home/jenkins
	I0420 01:13:10.654047  129976 main.go:141] libmachine: (flannel-831611) DBG | Checking permissions on dir: /home
	I0420 01:13:10.654064  129976 main.go:141] libmachine: (flannel-831611) DBG | Skipping /home - not owner
	I0420 01:13:10.655145  129976 main.go:141] libmachine: (flannel-831611) define libvirt domain using xml: 
	I0420 01:13:10.655168  129976 main.go:141] libmachine: (flannel-831611) <domain type='kvm'>
	I0420 01:13:10.655193  129976 main.go:141] libmachine: (flannel-831611)   <name>flannel-831611</name>
	I0420 01:13:10.655206  129976 main.go:141] libmachine: (flannel-831611)   <memory unit='MiB'>3072</memory>
	I0420 01:13:10.655216  129976 main.go:141] libmachine: (flannel-831611)   <vcpu>2</vcpu>
	I0420 01:13:10.655225  129976 main.go:141] libmachine: (flannel-831611)   <features>
	I0420 01:13:10.655242  129976 main.go:141] libmachine: (flannel-831611)     <acpi/>
	I0420 01:13:10.655254  129976 main.go:141] libmachine: (flannel-831611)     <apic/>
	I0420 01:13:10.655311  129976 main.go:141] libmachine: (flannel-831611)     <pae/>
	I0420 01:13:10.655338  129976 main.go:141] libmachine: (flannel-831611)     
	I0420 01:13:10.655349  129976 main.go:141] libmachine: (flannel-831611)   </features>
	I0420 01:13:10.655365  129976 main.go:141] libmachine: (flannel-831611)   <cpu mode='host-passthrough'>
	I0420 01:13:10.655374  129976 main.go:141] libmachine: (flannel-831611)   
	I0420 01:13:10.655381  129976 main.go:141] libmachine: (flannel-831611)   </cpu>
	I0420 01:13:10.655391  129976 main.go:141] libmachine: (flannel-831611)   <os>
	I0420 01:13:10.655398  129976 main.go:141] libmachine: (flannel-831611)     <type>hvm</type>
	I0420 01:13:10.655408  129976 main.go:141] libmachine: (flannel-831611)     <boot dev='cdrom'/>
	I0420 01:13:10.655415  129976 main.go:141] libmachine: (flannel-831611)     <boot dev='hd'/>
	I0420 01:13:10.655427  129976 main.go:141] libmachine: (flannel-831611)     <bootmenu enable='no'/>
	I0420 01:13:10.655435  129976 main.go:141] libmachine: (flannel-831611)   </os>
	I0420 01:13:10.655460  129976 main.go:141] libmachine: (flannel-831611)   <devices>
	I0420 01:13:10.655484  129976 main.go:141] libmachine: (flannel-831611)     <disk type='file' device='cdrom'>
	I0420 01:13:10.655498  129976 main.go:141] libmachine: (flannel-831611)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/boot2docker.iso'/>
	I0420 01:13:10.655509  129976 main.go:141] libmachine: (flannel-831611)       <target dev='hdc' bus='scsi'/>
	I0420 01:13:10.655520  129976 main.go:141] libmachine: (flannel-831611)       <readonly/>
	I0420 01:13:10.655530  129976 main.go:141] libmachine: (flannel-831611)     </disk>
	I0420 01:13:10.655549  129976 main.go:141] libmachine: (flannel-831611)     <disk type='file' device='disk'>
	I0420 01:13:10.655565  129976 main.go:141] libmachine: (flannel-831611)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 01:13:10.655582  129976 main.go:141] libmachine: (flannel-831611)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/flannel-831611.rawdisk'/>
	I0420 01:13:10.655593  129976 main.go:141] libmachine: (flannel-831611)       <target dev='hda' bus='virtio'/>
	I0420 01:13:10.655601  129976 main.go:141] libmachine: (flannel-831611)     </disk>
	I0420 01:13:10.655610  129976 main.go:141] libmachine: (flannel-831611)     <interface type='network'>
	I0420 01:13:10.655616  129976 main.go:141] libmachine: (flannel-831611)       <source network='mk-flannel-831611'/>
	I0420 01:13:10.655624  129976 main.go:141] libmachine: (flannel-831611)       <model type='virtio'/>
	I0420 01:13:10.655642  129976 main.go:141] libmachine: (flannel-831611)     </interface>
	I0420 01:13:10.655661  129976 main.go:141] libmachine: (flannel-831611)     <interface type='network'>
	I0420 01:13:10.655674  129976 main.go:141] libmachine: (flannel-831611)       <source network='default'/>
	I0420 01:13:10.655688  129976 main.go:141] libmachine: (flannel-831611)       <model type='virtio'/>
	I0420 01:13:10.655699  129976 main.go:141] libmachine: (flannel-831611)     </interface>
	I0420 01:13:10.655709  129976 main.go:141] libmachine: (flannel-831611)     <serial type='pty'>
	I0420 01:13:10.655720  129976 main.go:141] libmachine: (flannel-831611)       <target port='0'/>
	I0420 01:13:10.655730  129976 main.go:141] libmachine: (flannel-831611)     </serial>
	I0420 01:13:10.655738  129976 main.go:141] libmachine: (flannel-831611)     <console type='pty'>
	I0420 01:13:10.655749  129976 main.go:141] libmachine: (flannel-831611)       <target type='serial' port='0'/>
	I0420 01:13:10.655778  129976 main.go:141] libmachine: (flannel-831611)     </console>
	I0420 01:13:10.655802  129976 main.go:141] libmachine: (flannel-831611)     <rng model='virtio'>
	I0420 01:13:10.655818  129976 main.go:141] libmachine: (flannel-831611)       <backend model='random'>/dev/random</backend>
	I0420 01:13:10.655828  129976 main.go:141] libmachine: (flannel-831611)     </rng>
	I0420 01:13:10.655839  129976 main.go:141] libmachine: (flannel-831611)     
	I0420 01:13:10.655848  129976 main.go:141] libmachine: (flannel-831611)     
	I0420 01:13:10.655858  129976 main.go:141] libmachine: (flannel-831611)   </devices>
	I0420 01:13:10.655870  129976 main.go:141] libmachine: (flannel-831611) </domain>
	I0420 01:13:10.655882  129976 main.go:141] libmachine: (flannel-831611) 
	I0420 01:13:10.664108  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:ea:93:5a in network default
	I0420 01:13:10.664882  129976 main.go:141] libmachine: (flannel-831611) Ensuring networks are active...
	I0420 01:13:10.664909  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:10.665878  129976 main.go:141] libmachine: (flannel-831611) Ensuring network default is active
	I0420 01:13:10.666257  129976 main.go:141] libmachine: (flannel-831611) Ensuring network mk-flannel-831611 is active
	I0420 01:13:10.666964  129976 main.go:141] libmachine: (flannel-831611) Getting domain xml...
	I0420 01:13:10.668019  129976 main.go:141] libmachine: (flannel-831611) Creating domain...
	I0420 01:13:12.083453  129976 main.go:141] libmachine: (flannel-831611) Waiting to get IP...
	I0420 01:13:12.084528  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:12.085171  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:12.085195  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:12.085127  130497 retry.go:31] will retry after 237.721218ms: waiting for machine to come up
	I0420 01:13:12.324809  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:12.325841  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:12.325865  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:12.325489  130497 retry.go:31] will retry after 304.906977ms: waiting for machine to come up
	I0420 01:13:12.632146  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:12.632719  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:12.632743  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:12.632665  130497 retry.go:31] will retry after 436.885029ms: waiting for machine to come up
	I0420 01:13:13.071384  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:13.072205  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:13.072239  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:13.072098  130497 retry.go:31] will retry after 480.206433ms: waiting for machine to come up
	I0420 01:13:13.553656  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:13.554249  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:13.554296  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:13.554204  130497 retry.go:31] will retry after 562.014813ms: waiting for machine to come up
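
The flannel-831611 block above is the kvm2 driver's machine-creation path: it renders the libvirt domain XML shown (ISO cdrom, raw disk, virtio NICs on the default and mk-flannel-831611 networks), defines and starts the domain, then polls the network's DHCP leases with increasing backoff until the MAC 52:54:00:2b:d9:87 obtains an address. A minimal sketch of that define-and-wait loop, assuming the libvirt.org/go/libvirt bindings; it illustrates the idea only and is not minikube's actual driver code:

    package kvmdriver

    import (
    	"fmt"
    	"strings"
    	"time"

    	libvirt "libvirt.org/go/libvirt"
    )

    // defineAndWaitForIP defines a domain from XML, starts it, and polls the
    // network's DHCP leases until the given MAC has an address or we time out.
    func defineAndWaitForIP(domainXML, network, mac string) (string, error) {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return "", err
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
    	if err != nil {
    		return "", err
    	}
    	if err := dom.Create(); err != nil { // "Creating domain..."
    		return "", err
    	}

    	net, err := conn.LookupNetworkByName(network)
    	if err != nil {
    		return "", err
    	}
    	deadline := time.Now().Add(5 * time.Minute)
    	for time.Now().Before(deadline) { // "Waiting to get IP..."
    		leases, err := net.GetDHCPLeases()
    		if err != nil {
    			return "", err
    		}
    		for _, l := range leases {
    			if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
    				return l.IPaddr, nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the real driver retries with growing backoff
    	}
    	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
    }

Once a lease appears, the driver reserves it as a static host entry, which is what the "Reserving static IP address..." lines later in this log refer to.
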
	I0420 01:13:11.816805  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetIP
	I0420 01:13:11.820111  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:11.820577  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:11.820615  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:11.820859  128503 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:13:11.826703  128503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:13:11.843940  128503 kubeadm.go:877] updating cluster {Name:kindnet-831611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:13:11.844088  128503 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:13:11.844155  128503 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:13:11.895404  128503 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:13:11.895491  128503 ssh_runner.go:195] Run: which lz4
	I0420 01:13:11.900839  128503 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:13:11.908027  128503 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:13:11.908067  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:13:13.870833  128503 crio.go:462] duration metric: took 1.970022349s to copy over tarball
	I0420 01:13:13.870930  128503 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
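
Just above, the kindnet-831611 bootstrap checks whether the guest already has the preloaded images: sudo crictl images --output json does not show registry.k8s.io/kube-apiserver:v1.30.0, so the ≈395 MB preloaded-images tarball is scp'd to /preloaded.tar.lz4 and unpacked into /var with tar -I lz4. A rough local sketch of that check-then-extract step; the real code runs these commands on the guest over SSH via ssh_runner, and the JSON field names here are assumed from crictl's output format:

    package preload

    import (
    	"encoding/json"
    	"os/exec"
    )

    // imageList is the part of `crictl images --output json` we care about.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the CRI runtime already knows the given tag.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    // extractPreload unpacks the lz4 tarball of preloaded images into /var,
    // mirroring the tar command visible in the log above.
    func extractPreload(tarball string) error {
    	cmd := exec.Command("sudo", "tar", "--xattrs",
    		"--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	return cmd.Run()
    }
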
	I0420 01:13:14.069790  128374 addons.go:505] duration metric: took 2.455984426s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0420 01:13:15.153346  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-l8ghq" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:17.153504  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-l8ghq" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:16.944318  128503 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.073344777s)
	I0420 01:13:16.944356  128503 crio.go:469] duration metric: took 3.073484868s to extract the tarball
	I0420 01:13:16.944367  128503 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:13:16.989144  128503 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:13:17.044615  128503 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:13:17.044644  128503 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:13:17.044654  128503 kubeadm.go:928] updating node { 192.168.61.217 8443 v1.30.0 crio true true} ...
	I0420 01:13:17.044789  128503 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-831611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kindnet-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0420 01:13:17.044871  128503 ssh_runner.go:195] Run: crio config
	I0420 01:13:17.109629  128503 cni.go:84] Creating CNI manager for "kindnet"
	I0420 01:13:17.109654  128503 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:13:17.109676  128503 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.217 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-831611 NodeName:kindnet-831611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:13:17.109859  128503 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-831611"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:13:17.109932  128503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:13:17.123226  128503 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:13:17.123290  128503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:13:17.135303  128503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0420 01:13:17.159616  128503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:13:17.180598  128503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
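
The kubeadm config printed above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---); it is transferred to /var/tmp/minikube/kubeadm.yaml.new (2158 bytes) and later copied over /var/tmp/minikube/kubeadm.yaml on the guest (the sudo cp a little further down) before kubeadm init runs. A small local sketch of assembling and publishing such a file, with hypothetical document strings; minikube itself renders these from templates and ships them over SSH rather than writing them locally:

    package bootstrapper

    import (
    	"os"
    	"path/filepath"
    	"strings"
    )

    // writeKubeadmConfig joins the individual kubeadm documents (each expected
    // to end with a newline) into one multi-document YAML, writes it to
    // kubeadm.yaml.new, then renames it into place so readers never see a
    // half-written file. The log above does the equivalent with scp + cp.
    func writeKubeadmConfig(dir string, docs ...string) error {
    	payload := strings.Join(docs, "---\n")
    	tmp := filepath.Join(dir, "kubeadm.yaml.new")
    	if err := os.WriteFile(tmp, []byte(payload), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, filepath.Join(dir, "kubeadm.yaml"))
    }
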
	I0420 01:13:17.200131  128503 ssh_runner.go:195] Run: grep 192.168.61.217	control-plane.minikube.internal$ /etc/hosts
	I0420 01:13:17.204823  128503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:13:17.221840  128503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:13:17.369827  128503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:13:17.393358  128503 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611 for IP: 192.168.61.217
	I0420 01:13:17.393385  128503 certs.go:194] generating shared ca certs ...
	I0420 01:13:17.393407  128503 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:17.393609  128503 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:13:17.393668  128503 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:13:17.393680  128503 certs.go:256] generating profile certs ...
	I0420 01:13:17.393750  128503 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.key
	I0420 01:13:17.393768  128503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt with IP's: []
	I0420 01:13:17.629710  128503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt ...
	I0420 01:13:17.629740  128503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: {Name:mk35c424922a90e514b5c4922df18a830bad5a64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:17.629905  128503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.key ...
	I0420 01:13:17.629919  128503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.key: {Name:mka5f595d465401a5db29219717dd4f350639128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:17.630000  128503 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.key.9e125d1e
	I0420 01:13:17.630015  128503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.crt.9e125d1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.217]
	I0420 01:13:17.699411  128503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.crt.9e125d1e ...
	I0420 01:13:17.699442  128503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.crt.9e125d1e: {Name:mkd0b231d10c9a748ffb12d8cd678e7719050228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:17.699617  128503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.key.9e125d1e ...
	I0420 01:13:17.699640  128503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.key.9e125d1e: {Name:mk91f3c04c99a516eaf69a24fdafaba8c7055609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:17.699732  128503 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.crt.9e125d1e -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.crt
	I0420 01:13:17.699827  128503 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.key.9e125d1e -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.key
	I0420 01:13:17.699888  128503 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/proxy-client.key
	I0420 01:13:17.699904  128503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/proxy-client.crt with IP's: []
	I0420 01:13:17.757397  128503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/proxy-client.crt ...
	I0420 01:13:17.757429  128503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/proxy-client.crt: {Name:mk0562eec89abcd8c92cfefacbcc1553b1a0f893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:17.757583  128503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/proxy-client.key ...
	I0420 01:13:17.757596  128503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/proxy-client.key: {Name:mk6759d05e12ffc8d1fcc175fd7d31a75ab6ddd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
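
certs.go above generates the per-profile certificates (client, apiserver, proxy-client) and signs them with the shared minikubeCA; the apiserver certificate carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.217]. A condensed standard-library sketch of issuing a CA-signed serving certificate with IP SANs; serial number, subject and validity here are illustrative, not minikube's actual choices:

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // signServingCert issues a certificate for the given IP SANs, signed by the
    // provided CA certificate and key (the shared "minikubeCA" in the log above).
    func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, ip := range ips {
    		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }
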
	I0420 01:13:17.757759  128503 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:13:17.757798  128503 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:13:17.757808  128503 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:13:17.757828  128503 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:13:17.757851  128503 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:13:17.757871  128503 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:13:17.757906  128503 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:13:17.758539  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:13:17.789367  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:13:17.819183  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:13:17.847264  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:13:17.878329  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0420 01:13:17.909830  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 01:13:17.938480  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:13:17.968958  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:13:18.004414  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:13:18.039739  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:13:18.069226  128503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:13:18.102303  128503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:13:18.135092  128503 ssh_runner.go:195] Run: openssl version
	I0420 01:13:18.142657  128503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:13:18.167025  128503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:13:18.176306  128503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:13:18.176379  128503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:13:18.183771  128503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:13:18.199207  128503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:13:18.213751  128503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:13:18.219656  128503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:13:18.219730  128503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:13:18.226984  128503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:13:18.241463  128503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:13:18.259792  128503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:13:18.265613  128503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:13:18.265679  128503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:13:18.272783  128503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
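
The openssl x509 -hash -noout runs above compute each certificate's OpenSSL subject hash, and the ln -fs commands publish the PEM files into /etc/ssl/certs under <hash>.0 (for example b5213941.0 for minikubeCA.pem) so TLS clients on the guest trust them. A hedged sketch of the same linking step using os/exec and os.Symlink; error handling is trimmed and the helper name is made up:

    package certs

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installTrustedCert links certPath into /etc/ssl/certs under its OpenSSL
    // subject hash, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem.
    func installTrustedCert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // ln -fs semantics: replace an existing link if present
    	return os.Symlink(certPath, link)
    }
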
	I0420 01:13:18.286985  128503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:13:18.292734  128503 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 01:13:18.292786  128503 kubeadm.go:391] StartCluster: {Name:kindnet-831611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:13:18.292865  128503 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:13:18.292906  128503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:13:18.340400  128503 cri.go:89] found id: ""
	I0420 01:13:18.340466  128503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 01:13:18.353896  128503 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:13:18.366779  128503 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:13:18.379167  128503 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:13:18.379191  128503 kubeadm.go:156] found existing configuration files:
	
	I0420 01:13:18.379257  128503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:13:18.390893  128503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:13:18.390963  128503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:13:18.409018  128503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:13:18.426672  128503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:13:18.426743  128503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:13:18.444599  128503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:13:18.455808  128503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:13:18.455873  128503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:13:18.468513  128503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:13:18.479115  128503 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:13:18.479180  128503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:13:18.491961  128503 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:13:18.554878  128503 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:13:18.554972  128503 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:13:18.687255  128503 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:13:18.687395  128503 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:13:18.687537  128503 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:13:18.967305  128503 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:13:14.117966  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:14.118559  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:14.118602  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:14.118516  130497 retry.go:31] will retry after 581.211548ms: waiting for machine to come up
	I0420 01:13:14.702194  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:14.702877  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:14.702914  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:14.702826  130497 retry.go:31] will retry after 836.900074ms: waiting for machine to come up
	I0420 01:13:15.541284  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:15.541734  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:15.541764  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:15.541679  130497 retry.go:31] will retry after 1.295275592s: waiting for machine to come up
	I0420 01:13:16.839220  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:16.839758  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:16.839816  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:16.839726  130497 retry.go:31] will retry after 1.296791677s: waiting for machine to come up
	I0420 01:13:18.137906  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:18.138480  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:18.138513  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:18.138431  130497 retry.go:31] will retry after 1.594436356s: waiting for machine to come up
	I0420 01:13:19.022081  128503 out.go:204]   - Generating certificates and keys ...
	I0420 01:13:19.022239  128503 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:13:19.022350  128503 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:13:19.235658  128503 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 01:13:19.583062  128503 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 01:13:20.040033  128503 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 01:13:20.115493  128503 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 01:13:20.369137  128503 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 01:13:20.369490  128503 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-831611 localhost] and IPs [192.168.61.217 127.0.0.1 ::1]
	I0420 01:13:20.471163  128503 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 01:13:20.471378  128503 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-831611 localhost] and IPs [192.168.61.217 127.0.0.1 ::1]
	I0420 01:13:20.660167  128503 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 01:13:20.782107  128503 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 01:13:20.974242  128503 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 01:13:20.974503  128503 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:13:21.190521  128503 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:13:21.306167  128503 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:13:21.445454  128503 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:13:21.736020  128503 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:13:21.811744  128503 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:13:21.815399  128503 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:13:21.816474  128503 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:13:19.274342  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-l8ghq" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:21.656654  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-l8ghq" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:19.734585  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:19.735160  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:19.735205  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:19.735135  130497 retry.go:31] will retry after 2.369460765s: waiting for machine to come up
	I0420 01:13:22.106605  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:22.107185  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:22.107217  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:22.107128  130497 retry.go:31] will retry after 3.087897751s: waiting for machine to come up
	I0420 01:13:21.819299  128503 out.go:204]   - Booting up control plane ...
	I0420 01:13:21.819431  128503 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:13:21.819541  128503 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:13:21.819643  128503 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:13:21.839796  128503 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:13:21.841095  128503 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:13:21.841172  128503 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:13:22.002849  128503 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:13:22.002973  128503 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:13:23.010461  128503 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.007777736s
	I0420 01:13:23.010584  128503 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
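
kubeadm's [kubelet-check] and [api-check] phases poll the kubelet and then the API server health endpoints; in this run the kubelet is healthy after about 1.0s and the API server after about 5.5s (reported a few lines below). A minimal sketch of such a health poll against https://192.168.61.217:8443/healthz; skipping certificate verification here is for illustration only, kubeadm actually validates against the cluster CA:

    package bootstrapper

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForAPIServer polls the apiserver's /healthz until it returns 200 OK
    // or the timeout expires (kubeadm allows up to 4m0s for this phase).
    func waitForAPIServer(host string, port int, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// Illustrative only: the real check pins the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := fmt.Sprintf("https://%s:%d/healthz", host, port)
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver on %s:%d not healthy after %s", host, port, timeout)
    }
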
	I0420 01:13:24.153354  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-l8ghq" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:24.653000  128374 pod_ready.go:97] pod "coredns-7db6d8ff4d-l8ghq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:24 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:12 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:12 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:12 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:12 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.125 HostIPs:[{IP:192.168.39.125}] PodIP: PodIPs:[] StartTime:2024-04-20 01:13:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-20 01:13:14 +0000 UTC,FinishedAt:2024-04-20 01:13:24 +0000 UTC,ContainerID:cri-o://c2b497264273648c986a8a043039496ded6ffcfe309c836c5b56e38b9b13878f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://c2b497264273648c986a8a043039496ded6ffcfe309c836c5b56e38b9b13878f Started:0xc002721750 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0420 01:13:24.653040  128374 pod_ready.go:81] duration metric: took 11.508070817s for pod "coredns-7db6d8ff4d-l8ghq" in "kube-system" namespace to be "Ready" ...
	E0420 01:13:24.653056  128374 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-l8ghq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:24 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:12 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:12 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:12 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-20 01:13:12 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.125 HostIPs:[{IP:192.168.39.125}] PodIP: PodIPs:[] StartTime:2024-04-20 01:13:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-20 01:13:14 +0000 UTC,FinishedAt:2024-04-20 01:13:24 +0000 UTC,ContainerID:cri-o://c2b497264273648c986a8a043039496ded6ffcfe309c836c5b56e38b9b13878f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://c2b497264273648c986a8a043039496ded6ffcfe309c836c5b56e38b9b13878f Started:0xc002721750 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0420 01:13:24.653068  128374 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:26.661430  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:28.519925  128503 kubeadm.go:309] [api-check] The API server is healthy after 5.510158951s
	I0420 01:13:28.535713  128503 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:13:28.550394  128503 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:13:28.581952  128503 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:13:28.582215  128503 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-831611 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:13:28.593123  128503 kubeadm.go:309] [bootstrap-token] Using token: j24rr7.6o1nu5zm2vsohof6
	I0420 01:13:25.197342  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:25.197999  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:25.198040  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:25.197914  130497 retry.go:31] will retry after 3.326977853s: waiting for machine to come up
	I0420 01:13:28.526571  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:28.527043  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find current IP address of domain flannel-831611 in network mk-flannel-831611
	I0420 01:13:28.527070  129976 main.go:141] libmachine: (flannel-831611) DBG | I0420 01:13:28.527000  130497 retry.go:31] will retry after 5.255216887s: waiting for machine to come up
	I0420 01:13:28.594743  128503 out.go:204]   - Configuring RBAC rules ...
	I0420 01:13:28.594852  128503 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:13:28.603431  128503 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:13:28.612059  128503 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:13:28.616176  128503 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:13:28.620403  128503 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:13:28.629281  128503 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:13:28.928209  128503 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:13:29.367642  128503 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:13:29.933822  128503 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:13:29.934911  128503 kubeadm.go:309] 
	I0420 01:13:29.934994  128503 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:13:29.935009  128503 kubeadm.go:309] 
	I0420 01:13:29.935115  128503 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:13:29.935124  128503 kubeadm.go:309] 
	I0420 01:13:29.935166  128503 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:13:29.935251  128503 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:13:29.935370  128503 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:13:29.935388  128503 kubeadm.go:309] 
	I0420 01:13:29.935469  128503 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:13:29.935479  128503 kubeadm.go:309] 
	I0420 01:13:29.935546  128503 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:13:29.935556  128503 kubeadm.go:309] 
	I0420 01:13:29.935632  128503 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:13:29.935703  128503 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:13:29.935797  128503 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:13:29.935815  128503 kubeadm.go:309] 
	I0420 01:13:29.935938  128503 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:13:29.936052  128503 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:13:29.936062  128503 kubeadm.go:309] 
	I0420 01:13:29.936171  128503 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token j24rr7.6o1nu5zm2vsohof6 \
	I0420 01:13:29.936311  128503 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:13:29.936346  128503 kubeadm.go:309] 	--control-plane 
	I0420 01:13:29.936374  128503 kubeadm.go:309] 
	I0420 01:13:29.936482  128503 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:13:29.936491  128503 kubeadm.go:309] 
	I0420 01:13:29.936613  128503 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token j24rr7.6o1nu5zm2vsohof6 \
	I0420 01:13:29.936751  128503 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:13:29.937078  128503 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:13:29.937255  128503 cni.go:84] Creating CNI manager for "kindnet"
	I0420 01:13:29.938873  128503 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0420 01:13:29.160744  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:31.162153  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:33.786356  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:33.786919  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has current primary IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:33.786980  129976 main.go:141] libmachine: (flannel-831611) Found IP for machine: 192.168.72.89
	I0420 01:13:33.787004  129976 main.go:141] libmachine: (flannel-831611) Reserving static IP address...
	I0420 01:13:33.787273  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find host DHCP lease matching {name: "flannel-831611", mac: "52:54:00:2b:d9:87", ip: "192.168.72.89"} in network mk-flannel-831611
	I0420 01:13:33.864152  129976 main.go:141] libmachine: (flannel-831611) Reserved static IP address: 192.168.72.89
	I0420 01:13:33.864184  129976 main.go:141] libmachine: (flannel-831611) Waiting for SSH to be available...
	I0420 01:13:33.864194  129976 main.go:141] libmachine: (flannel-831611) DBG | Getting to WaitForSSH function...
	I0420 01:13:33.866800  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:33.867108  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611
	I0420 01:13:33.867139  129976 main.go:141] libmachine: (flannel-831611) DBG | unable to find defined IP address of network mk-flannel-831611 interface with MAC address 52:54:00:2b:d9:87
	I0420 01:13:33.867284  129976 main.go:141] libmachine: (flannel-831611) DBG | Using SSH client type: external
	I0420 01:13:33.867308  129976 main.go:141] libmachine: (flannel-831611) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/id_rsa (-rw-------)
	I0420 01:13:33.867557  129976 main.go:141] libmachine: (flannel-831611) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:13:33.867580  129976 main.go:141] libmachine: (flannel-831611) DBG | About to run SSH command:
	I0420 01:13:33.867602  129976 main.go:141] libmachine: (flannel-831611) DBG | exit 0
	I0420 01:13:33.871698  129976 main.go:141] libmachine: (flannel-831611) DBG | SSH cmd err, output: exit status 255: 
	I0420 01:13:33.871724  129976 main.go:141] libmachine: (flannel-831611) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0420 01:13:33.871735  129976 main.go:141] libmachine: (flannel-831611) DBG | command : exit 0
	I0420 01:13:33.871744  129976 main.go:141] libmachine: (flannel-831611) DBG | err     : exit status 255
	I0420 01:13:33.871753  129976 main.go:141] libmachine: (flannel-831611) DBG | output  : 
	I0420 01:13:29.940450  128503 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0420 01:13:29.947773  128503 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0420 01:13:29.947802  128503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0420 01:13:29.973098  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0420 01:13:30.277033  128503 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:13:30.277118  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:30.277144  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-831611 minikube.k8s.io/updated_at=2024_04_20T01_13_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=kindnet-831611 minikube.k8s.io/primary=true
	I0420 01:13:30.443927  128503 ops.go:34] apiserver oom_adj: -16
	I0420 01:13:30.444128  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:30.945181  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:31.444959  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:31.945136  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:32.445040  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:32.945197  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:33.444557  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:33.944839  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:34.444974  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:33.660802  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:36.159712  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:38.161393  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:38.391802  130085 start.go:364] duration metric: took 1m9.773418729s to acquireMachinesLock for "kubernetes-upgrade-345460"
	I0420 01:13:38.391871  130085 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:13:38.391883  130085 fix.go:54] fixHost starting: 
	I0420 01:13:38.393757  130085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:38.393888  130085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:38.411705  130085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0420 01:13:38.412278  130085 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:38.412858  130085 main.go:141] libmachine: Using API Version  1
	I0420 01:13:38.412889  130085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:38.413302  130085 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:38.413511  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:13:38.413697  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetState
	I0420 01:13:38.415510  130085 fix.go:112] recreateIfNeeded on kubernetes-upgrade-345460: state=Running err=<nil>
	W0420 01:13:38.415528  130085 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:13:38.417490  130085 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-345460" VM ...
	I0420 01:13:38.418757  130085 machine.go:94] provisionDockerMachine start ...
	I0420 01:13:38.418792  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:13:38.419043  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:38.422135  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.422971  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:38.423004  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.423141  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:13:38.423299  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:38.423521  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:38.423665  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:13:38.423854  130085 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:38.424118  130085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:13:38.424137  130085 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:13:36.872851  129976 main.go:141] libmachine: (flannel-831611) DBG | Getting to WaitForSSH function...
	I0420 01:13:36.875527  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:36.876008  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:36.876037  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:36.876237  129976 main.go:141] libmachine: (flannel-831611) DBG | Using SSH client type: external
	I0420 01:13:36.876266  129976 main.go:141] libmachine: (flannel-831611) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/id_rsa (-rw-------)
	I0420 01:13:36.876297  129976 main.go:141] libmachine: (flannel-831611) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:13:36.876314  129976 main.go:141] libmachine: (flannel-831611) DBG | About to run SSH command:
	I0420 01:13:36.876330  129976 main.go:141] libmachine: (flannel-831611) DBG | exit 0
	I0420 01:13:37.003107  129976 main.go:141] libmachine: (flannel-831611) DBG | SSH cmd err, output: <nil>: 
	I0420 01:13:37.003457  129976 main.go:141] libmachine: (flannel-831611) KVM machine creation complete!
	I0420 01:13:37.003834  129976 main.go:141] libmachine: (flannel-831611) Calling .GetConfigRaw
	I0420 01:13:37.004551  129976 main.go:141] libmachine: (flannel-831611) Calling .DriverName
	I0420 01:13:37.004800  129976 main.go:141] libmachine: (flannel-831611) Calling .DriverName
	I0420 01:13:37.004989  129976 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 01:13:37.005004  129976 main.go:141] libmachine: (flannel-831611) Calling .GetState
	I0420 01:13:37.006568  129976 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 01:13:37.006586  129976 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 01:13:37.006592  129976 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 01:13:37.006599  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:37.009063  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.009575  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:37.009605  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.009755  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:37.010030  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.010244  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.010399  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:37.010596  129976 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:37.010840  129976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:13:37.010852  129976 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 01:13:37.117495  129976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:13:37.117525  129976 main.go:141] libmachine: Detecting the provisioner...
	I0420 01:13:37.117532  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:37.120413  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.120849  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:37.120891  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.121071  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:37.121294  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.121460  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.121628  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:37.121812  129976 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:37.122017  129976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:13:37.122031  129976 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 01:13:37.227671  129976 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 01:13:37.227732  129976 main.go:141] libmachine: found compatible host: buildroot
	I0420 01:13:37.227739  129976 main.go:141] libmachine: Provisioning with buildroot...
	I0420 01:13:37.227747  129976 main.go:141] libmachine: (flannel-831611) Calling .GetMachineName
	I0420 01:13:37.228016  129976 buildroot.go:166] provisioning hostname "flannel-831611"
	I0420 01:13:37.228040  129976 main.go:141] libmachine: (flannel-831611) Calling .GetMachineName
	I0420 01:13:37.228262  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:37.230913  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.231274  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:37.231294  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.231543  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:37.231730  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.231936  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.232080  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:37.232274  129976 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:37.232491  129976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:13:37.232506  129976 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-831611 && echo "flannel-831611" | sudo tee /etc/hostname
	I0420 01:13:37.355896  129976 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-831611
	
	I0420 01:13:37.355925  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:37.358907  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.359307  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:37.359335  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.359523  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:37.359745  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.359927  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.360110  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:37.360324  129976 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:37.360576  129976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:13:37.360602  129976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-831611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-831611/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-831611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:13:37.479871  129976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:13:37.479902  129976 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:13:37.479959  129976 buildroot.go:174] setting up certificates
	I0420 01:13:37.479976  129976 provision.go:84] configureAuth start
	I0420 01:13:37.479995  129976 main.go:141] libmachine: (flannel-831611) Calling .GetMachineName
	I0420 01:13:37.480342  129976 main.go:141] libmachine: (flannel-831611) Calling .GetIP
	I0420 01:13:37.483377  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.483864  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:37.483893  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.484037  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:37.486469  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.486851  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:37.486899  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.487074  129976 provision.go:143] copyHostCerts
	I0420 01:13:37.487135  129976 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:13:37.487149  129976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:13:37.487209  129976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:13:37.487336  129976 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:13:37.487348  129976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:13:37.487379  129976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:13:37.487472  129976 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:13:37.487483  129976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:13:37.487518  129976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:13:37.487628  129976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.flannel-831611 san=[127.0.0.1 192.168.72.89 flannel-831611 localhost minikube]
	I0420 01:13:37.650715  129976 provision.go:177] copyRemoteCerts
	I0420 01:13:37.650783  129976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:13:37.650812  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:37.653487  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.653872  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:37.653903  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.654167  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:37.654359  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.654521  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:37.654649  129976 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/id_rsa Username:docker}
	I0420 01:13:37.746050  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0420 01:13:37.776419  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:13:37.805532  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:13:37.831675  129976 provision.go:87] duration metric: took 351.682111ms to configureAuth
	I0420 01:13:37.831698  129976 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:13:37.831868  129976 config.go:182] Loaded profile config "flannel-831611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:13:37.831957  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:37.834859  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.835215  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:37.835243  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:37.835404  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:37.835588  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.835783  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:37.835933  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:37.836096  129976 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:37.836309  129976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:13:37.836327  129976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:13:38.133487  129976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:13:38.133521  129976 main.go:141] libmachine: Checking connection to Docker...
	I0420 01:13:38.133532  129976 main.go:141] libmachine: (flannel-831611) Calling .GetURL
	I0420 01:13:38.134939  129976 main.go:141] libmachine: (flannel-831611) DBG | Using libvirt version 6000000
	I0420 01:13:38.137538  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.137993  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:38.138023  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.138238  129976 main.go:141] libmachine: Docker is up and running!
	I0420 01:13:38.138258  129976 main.go:141] libmachine: Reticulating splines...
	I0420 01:13:38.138267  129976 client.go:171] duration metric: took 28.017536467s to LocalClient.Create
	I0420 01:13:38.138307  129976 start.go:167] duration metric: took 28.017633173s to libmachine.API.Create "flannel-831611"
	I0420 01:13:38.138359  129976 start.go:293] postStartSetup for "flannel-831611" (driver="kvm2")
	I0420 01:13:38.138384  129976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:13:38.138408  129976 main.go:141] libmachine: (flannel-831611) Calling .DriverName
	I0420 01:13:38.138675  129976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:13:38.138703  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:38.141095  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.141498  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:38.141511  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.141692  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:38.141861  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:38.142063  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:38.142230  129976 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/id_rsa Username:docker}
	I0420 01:13:38.228021  129976 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:13:38.233103  129976 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:13:38.233142  129976 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:13:38.233226  129976 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:13:38.233302  129976 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:13:38.233415  129976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:13:38.244700  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:13:38.272388  129976 start.go:296] duration metric: took 133.996077ms for postStartSetup
	I0420 01:13:38.272437  129976 main.go:141] libmachine: (flannel-831611) Calling .GetConfigRaw
	I0420 01:13:38.273010  129976 main.go:141] libmachine: (flannel-831611) Calling .GetIP
	I0420 01:13:38.276045  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.276394  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:38.276437  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.276708  129976 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/config.json ...
	I0420 01:13:38.276893  129976 start.go:128] duration metric: took 28.177851726s to createHost
	I0420 01:13:38.276920  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:38.279307  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.279653  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:38.279685  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.279862  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:38.280082  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:38.280247  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:38.280381  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:38.280568  129976 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:38.280730  129976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:13:38.280741  129976 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:13:38.391625  129976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713575618.325362180
	
	I0420 01:13:38.391652  129976 fix.go:216] guest clock: 1713575618.325362180
	I0420 01:13:38.391662  129976 fix.go:229] Guest: 2024-04-20 01:13:38.32536218 +0000 UTC Remote: 2024-04-20 01:13:38.27690883 +0000 UTC m=+74.253881113 (delta=48.45335ms)
	I0420 01:13:38.391686  129976 fix.go:200] guest clock delta is within tolerance: 48.45335ms
	I0420 01:13:38.391693  129976 start.go:83] releasing machines lock for "flannel-831611", held for 28.292871365s
	I0420 01:13:38.391740  129976 main.go:141] libmachine: (flannel-831611) Calling .DriverName
	I0420 01:13:38.392074  129976 main.go:141] libmachine: (flannel-831611) Calling .GetIP
	I0420 01:13:38.396664  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.397120  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:38.397150  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.397337  129976 main.go:141] libmachine: (flannel-831611) Calling .DriverName
	I0420 01:13:38.397934  129976 main.go:141] libmachine: (flannel-831611) Calling .DriverName
	I0420 01:13:38.398149  129976 main.go:141] libmachine: (flannel-831611) Calling .DriverName
	I0420 01:13:38.398292  129976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:13:38.398338  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:38.398381  129976 ssh_runner.go:195] Run: cat /version.json
	I0420 01:13:38.398406  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHHostname
	I0420 01:13:38.401206  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.401510  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.401679  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:38.401714  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.401978  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:38.402002  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:38.402005  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:38.402210  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:38.402227  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHPort
	I0420 01:13:38.402403  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHKeyPath
	I0420 01:13:38.402403  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:38.402758  129976 main.go:141] libmachine: (flannel-831611) Calling .GetSSHUsername
	I0420 01:13:38.402753  129976 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/id_rsa Username:docker}
	I0420 01:13:38.402912  129976 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/flannel-831611/id_rsa Username:docker}
	I0420 01:13:38.484198  129976 ssh_runner.go:195] Run: systemctl --version
	I0420 01:13:38.508929  129976 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:13:38.679034  129976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:13:38.687062  129976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:13:38.687133  129976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:13:38.711064  129976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:13:38.711095  129976 start.go:494] detecting cgroup driver to use...
	I0420 01:13:38.711179  129976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:13:38.733019  129976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:13:38.750903  129976 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:13:38.750981  129976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:13:38.770371  129976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:13:38.790974  129976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:13:38.969823  129976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:13:34.945069  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:35.444219  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:35.944310  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:36.444442  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:36.944551  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:37.444722  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:37.944404  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:38.444136  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:38.944915  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:39.444716  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:39.149596  129976 docker.go:233] disabling docker service ...
	I0420 01:13:39.149689  129976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:13:39.170047  129976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:13:39.189000  129976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:13:39.361915  129976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:13:39.502180  129976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:13:39.529807  129976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:13:39.556654  129976 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:13:39.556722  129976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:39.569911  129976 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:13:39.570010  129976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:39.585124  129976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:39.599814  129976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:39.613637  129976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:13:39.628297  129976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:39.643658  129976 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:39.667130  129976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:39.680331  129976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:13:39.692118  129976 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:13:39.692174  129976 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:13:39.709832  129976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:13:39.722216  129976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:13:39.880781  129976 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:13:40.042836  129976 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:13:40.042926  129976 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:13:40.048854  129976 start.go:562] Will wait 60s for crictl version
	I0420 01:13:40.048911  129976 ssh_runner.go:195] Run: which crictl
	I0420 01:13:40.053584  129976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:13:40.094999  129976 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:13:40.095114  129976 ssh_runner.go:195] Run: crio --version
	I0420 01:13:40.131692  129976 ssh_runner.go:195] Run: crio --version
	I0420 01:13:40.167914  129976 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:13:39.944955  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:40.445142  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:40.944374  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:41.444610  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:41.944675  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:42.445087  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:42.944476  128503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:13:43.115578  128503 kubeadm.go:1107] duration metric: took 12.838535335s to wait for elevateKubeSystemPrivileges
	W0420 01:13:43.115622  128503 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:13:43.115634  128503 kubeadm.go:393] duration metric: took 24.82285077s to StartCluster
	I0420 01:13:43.115658  128503 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:43.115742  128503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:13:43.117436  128503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:43.117696  128503 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:13:43.119481  128503 out.go:177] * Verifying Kubernetes components...
	I0420 01:13:43.117905  128503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0420 01:13:43.117918  128503 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:13:43.118143  128503 config.go:182] Loaded profile config "kindnet-831611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:13:43.121165  128503 addons.go:69] Setting storage-provisioner=true in profile "kindnet-831611"
	I0420 01:13:43.121182  128503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:13:43.121214  128503 addons.go:69] Setting default-storageclass=true in profile "kindnet-831611"
	I0420 01:13:43.121259  128503 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-831611"
	I0420 01:13:43.121220  128503 addons.go:234] Setting addon storage-provisioner=true in "kindnet-831611"
	I0420 01:13:43.121360  128503 host.go:66] Checking if "kindnet-831611" exists ...
	I0420 01:13:43.121795  128503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:43.121811  128503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:43.121820  128503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:43.121832  128503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:43.143511  128503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0420 01:13:43.144020  128503 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:43.144491  128503 main.go:141] libmachine: Using API Version  1
	I0420 01:13:43.144511  128503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:43.144878  128503 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:43.146075  128503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:43.146104  128503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:43.148537  128503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33433
	I0420 01:13:43.148932  128503 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:43.149488  128503 main.go:141] libmachine: Using API Version  1
	I0420 01:13:43.149514  128503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:43.149920  128503 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:43.150377  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetState
	I0420 01:13:43.154046  128503 addons.go:234] Setting addon default-storageclass=true in "kindnet-831611"
	I0420 01:13:43.154099  128503 host.go:66] Checking if "kindnet-831611" exists ...
	I0420 01:13:43.154407  128503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:43.154443  128503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:43.171607  128503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43573
	I0420 01:13:43.172110  128503 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:43.172728  128503 main.go:141] libmachine: Using API Version  1
	I0420 01:13:43.172750  128503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:43.173171  128503 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:43.173442  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetState
	I0420 01:13:43.179415  128503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I0420 01:13:43.179940  128503 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:43.180455  128503 main.go:141] libmachine: Using API Version  1
	I0420 01:13:43.180477  128503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:43.180782  128503 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:43.181408  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:13:43.181455  128503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:13:43.181500  128503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:13:43.187703  128503 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:13:40.662162  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:42.662631  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:38.536567  130085 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-345460
	
	I0420 01:13:38.536598  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetMachineName
	I0420 01:13:38.537032  130085 buildroot.go:166] provisioning hostname "kubernetes-upgrade-345460"
	I0420 01:13:38.537066  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetMachineName
	I0420 01:13:38.537272  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:38.540815  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.541243  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:38.541277  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.541452  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:13:38.541639  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:38.541789  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:38.541943  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:13:38.542128  130085 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:38.542357  130085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:13:38.542377  130085 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-345460 && echo "kubernetes-upgrade-345460" | sudo tee /etc/hostname
	I0420 01:13:38.673914  130085 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-345460
	
	I0420 01:13:38.673958  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:38.676865  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.677461  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:38.677510  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.677800  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:13:38.677999  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:38.678245  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:38.678410  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:13:38.678568  130085 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:38.678755  130085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:13:38.678782  130085 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-345460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-345460/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-345460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:13:38.791947  130085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:13:38.791981  130085 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:13:38.792069  130085 buildroot.go:174] setting up certificates
	I0420 01:13:38.792081  130085 provision.go:84] configureAuth start
	I0420 01:13:38.792114  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetMachineName
	I0420 01:13:38.792505  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetIP
	I0420 01:13:38.795685  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.796086  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:38.796130  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.796412  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:38.798951  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.799312  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:38.799346  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.799478  130085 provision.go:143] copyHostCerts
	I0420 01:13:38.799540  130085 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:13:38.799552  130085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:13:38.799614  130085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:13:38.799749  130085 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:13:38.799762  130085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:13:38.799800  130085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:13:38.799887  130085 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:13:38.799897  130085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:13:38.799924  130085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:13:38.800063  130085 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-345460 san=[127.0.0.1 192.168.50.68 kubernetes-upgrade-345460 localhost minikube]
	I0420 01:13:38.969169  130085 provision.go:177] copyRemoteCerts
	I0420 01:13:38.969228  130085 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:13:38.969255  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:38.972315  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.972805  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:38.972854  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:38.973222  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:13:38.973494  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:38.973706  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:13:38.973879  130085 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa Username:docker}
	I0420 01:13:39.066004  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:13:39.106517  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:13:39.139668  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0420 01:13:39.175562  130085 provision.go:87] duration metric: took 383.464189ms to configureAuth
	I0420 01:13:39.175592  130085 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:13:39.175778  130085 config.go:182] Loaded profile config "kubernetes-upgrade-345460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:13:39.175876  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:39.179143  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:39.179546  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:39.179597  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:39.179745  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:13:39.179956  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:39.180185  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:39.180370  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:13:39.180643  130085 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:39.180882  130085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:13:39.180910  130085 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:13:40.169016  129976 main.go:141] libmachine: (flannel-831611) Calling .GetIP
	I0420 01:13:40.171861  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:40.172222  129976 main.go:141] libmachine: (flannel-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d9:87", ip: ""} in network mk-flannel-831611: {Iface:virbr3 ExpiryTime:2024-04-20 02:13:28 +0000 UTC Type:0 Mac:52:54:00:2b:d9:87 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-831611 Clientid:01:52:54:00:2b:d9:87}
	I0420 01:13:40.172251  129976 main.go:141] libmachine: (flannel-831611) DBG | domain flannel-831611 has defined IP address 192.168.72.89 and MAC address 52:54:00:2b:d9:87 in network mk-flannel-831611
	I0420 01:13:40.172506  129976 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0420 01:13:40.177230  129976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:13:40.191935  129976 kubeadm.go:877] updating cluster {Name:flannel-831611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:13:40.192061  129976 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:13:40.192134  129976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:13:40.238000  129976 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:13:40.238073  129976 ssh_runner.go:195] Run: which lz4
	I0420 01:13:40.243010  129976 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:13:40.248251  129976 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:13:40.248288  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:13:41.979541  129976 crio.go:462] duration metric: took 1.736557784s to copy over tarball
	I0420 01:13:41.979665  129976 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:13:43.189816  128503 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:13:43.189840  128503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:13:43.189868  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:43.193378  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:43.193840  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:43.193869  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:43.194034  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:43.194208  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:43.194362  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:43.194508  128503 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa Username:docker}
	I0420 01:13:43.201559  128503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I0420 01:13:43.202108  128503 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:13:43.202652  128503 main.go:141] libmachine: Using API Version  1
	I0420 01:13:43.202675  128503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:13:43.203049  128503 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:13:43.203228  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetState
	I0420 01:13:43.204807  128503 main.go:141] libmachine: (kindnet-831611) Calling .DriverName
	I0420 01:13:43.205124  128503 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:13:43.205154  128503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:13:43.205186  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHHostname
	I0420 01:13:43.207800  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:43.208239  128503 main.go:141] libmachine: (kindnet-831611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2a:a5", ip: ""} in network mk-kindnet-831611: {Iface:virbr2 ExpiryTime:2024-04-20 02:12:58 +0000 UTC Type:0 Mac:52:54:00:76:2a:a5 Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:kindnet-831611 Clientid:01:52:54:00:76:2a:a5}
	I0420 01:13:43.208263  128503 main.go:141] libmachine: (kindnet-831611) DBG | domain kindnet-831611 has defined IP address 192.168.61.217 and MAC address 52:54:00:76:2a:a5 in network mk-kindnet-831611
	I0420 01:13:43.208412  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHPort
	I0420 01:13:43.208586  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHKeyPath
	I0420 01:13:43.208787  128503 main.go:141] libmachine: (kindnet-831611) Calling .GetSSHUsername
	I0420 01:13:43.208944  128503 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kindnet-831611/id_rsa Username:docker}
	I0420 01:13:43.545011  128503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:13:43.562885  128503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:13:43.658242  128503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:13:43.658493  128503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0420 01:13:44.066365  128503 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:44.066395  128503 main.go:141] libmachine: (kindnet-831611) Calling .Close
	I0420 01:13:44.066846  128503 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:44.066867  128503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:44.066910  128503 main.go:141] libmachine: (kindnet-831611) DBG | Closing plugin on server side
	I0420 01:13:44.066954  128503 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:44.066971  128503 main.go:141] libmachine: (kindnet-831611) Calling .Close
	I0420 01:13:44.067343  128503 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:44.067367  128503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:44.067365  128503 main.go:141] libmachine: (kindnet-831611) DBG | Closing plugin on server side
	I0420 01:13:44.104689  128503 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:44.104714  128503 main.go:141] libmachine: (kindnet-831611) Calling .Close
	I0420 01:13:44.105044  128503 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:44.105113  128503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:44.437293  128503 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:44.437324  128503 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0420 01:13:44.438759  128503 node_ready.go:35] waiting up to 15m0s for node "kindnet-831611" to be "Ready" ...
	I0420 01:13:44.437335  128503 main.go:141] libmachine: (kindnet-831611) Calling .Close
	I0420 01:13:44.440040  128503 main.go:141] libmachine: (kindnet-831611) DBG | Closing plugin on server side
	I0420 01:13:44.440087  128503 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:44.440095  128503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:44.440104  128503 main.go:141] libmachine: Making call to close driver server
	I0420 01:13:44.440112  128503 main.go:141] libmachine: (kindnet-831611) Calling .Close
	I0420 01:13:44.440512  128503 main.go:141] libmachine: (kindnet-831611) DBG | Closing plugin on server side
	I0420 01:13:44.440574  128503 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:13:44.440601  128503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:13:44.443109  128503 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0420 01:13:44.444248  128503 addons.go:505] duration metric: took 1.326323005s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0420 01:13:44.666543  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:47.163985  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:45.823474  130085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:13:45.823506  130085 machine.go:97] duration metric: took 7.404729916s to provisionDockerMachine
	I0420 01:13:45.823523  130085 start.go:293] postStartSetup for "kubernetes-upgrade-345460" (driver="kvm2")
	I0420 01:13:45.823535  130085 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:13:45.823557  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:13:45.823925  130085 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:13:45.823955  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:45.826996  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:45.827445  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:45.827478  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:45.827617  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:13:45.827818  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:45.828054  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:13:45.828210  130085 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa Username:docker}
	I0420 01:13:45.923169  130085 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:13:45.928424  130085 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:13:45.928456  130085 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:13:45.928522  130085 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:13:45.928623  130085 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:13:45.928744  130085 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:13:45.940168  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:13:45.973348  130085 start.go:296] duration metric: took 149.807468ms for postStartSetup
	I0420 01:13:45.973387  130085 fix.go:56] duration metric: took 7.581504817s for fixHost
	I0420 01:13:45.973435  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:45.976638  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:45.976993  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:45.977023  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:45.977270  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:13:45.977501  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:45.977695  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:45.977847  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:13:45.978049  130085 main.go:141] libmachine: Using SSH client type: native
	I0420 01:13:45.978246  130085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0420 01:13:45.978276  130085 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:13:46.091509  130085 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713575626.074487786
	
	I0420 01:13:46.091533  130085 fix.go:216] guest clock: 1713575626.074487786
	I0420 01:13:46.091541  130085 fix.go:229] Guest: 2024-04-20 01:13:46.074487786 +0000 UTC Remote: 2024-04-20 01:13:45.973390582 +0000 UTC m=+77.503547113 (delta=101.097204ms)
	I0420 01:13:46.091574  130085 fix.go:200] guest clock delta is within tolerance: 101.097204ms
	I0420 01:13:46.091582  130085 start.go:83] releasing machines lock for "kubernetes-upgrade-345460", held for 7.699744064s
	I0420 01:13:46.091606  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:13:46.091914  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetIP
	I0420 01:13:46.094793  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:46.095219  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:46.095254  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:46.095442  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:13:46.095976  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:13:46.096164  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:13:46.096248  130085 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:13:46.096303  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:46.096425  130085 ssh_runner.go:195] Run: cat /version.json
	I0420 01:13:46.096460  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHHostname
	I0420 01:13:46.099450  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:46.099648  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:46.099918  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:46.099952  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:46.100111  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:13:46.100246  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:46.100266  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:46.100296  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:46.100445  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHPort
	I0420 01:13:46.100555  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:13:46.100575  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHKeyPath
	I0420 01:13:46.100779  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetSSHUsername
	I0420 01:13:46.100774  130085 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa Username:docker}
	I0420 01:13:46.100917  130085 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/kubernetes-upgrade-345460/id_rsa Username:docker}
	I0420 01:13:46.204057  130085 ssh_runner.go:195] Run: systemctl --version
	I0420 01:13:46.214212  130085 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:13:46.396771  130085 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:13:46.404790  130085 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:13:46.404887  130085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:13:46.418511  130085 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0420 01:13:46.418575  130085 start.go:494] detecting cgroup driver to use...
	I0420 01:13:46.418672  130085 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:13:46.445759  130085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:13:46.471326  130085 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:13:46.471396  130085 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:13:46.492634  130085 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:13:46.512507  130085 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:13:46.711689  130085 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:13:46.903592  130085 docker.go:233] disabling docker service ...
	I0420 01:13:46.903672  130085 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:13:46.932436  130085 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:13:46.949924  130085 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:13:47.125913  130085 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:13:47.297304  130085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:13:47.314160  130085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:13:47.341533  130085 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:13:47.341620  130085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:47.354294  130085 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:13:47.354372  130085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:47.366711  130085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:47.381879  130085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:47.397898  130085 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:13:47.413927  130085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:47.430171  130085 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:47.446169  130085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:13:47.461811  130085 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:13:47.476052  130085 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:13:47.487595  130085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:13:47.716623  130085 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:13:44.940847  129976 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.961141625s)
	I0420 01:13:44.940880  129976 crio.go:469] duration metric: took 2.961295276s to extract the tarball
	I0420 01:13:44.940889  129976 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:13:44.982122  129976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:13:45.034349  129976 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:13:45.034394  129976 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:13:45.034403  129976 kubeadm.go:928] updating node { 192.168.72.89 8443 v1.30.0 crio true true} ...
	I0420 01:13:45.034567  129976 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-831611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:flannel-831611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0420 01:13:45.034643  129976 ssh_runner.go:195] Run: crio config
	I0420 01:13:45.098632  129976 cni.go:84] Creating CNI manager for "flannel"
	I0420 01:13:45.098668  129976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:13:45.098706  129976 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.89 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-831611 NodeName:flannel-831611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:13:45.098918  129976 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-831611"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:13:45.098983  129976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:13:45.111041  129976 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:13:45.111100  129976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:13:45.121947  129976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0420 01:13:45.143206  129976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:13:45.168397  129976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0420 01:13:45.191490  129976 ssh_runner.go:195] Run: grep 192.168.72.89	control-plane.minikube.internal$ /etc/hosts
	I0420 01:13:45.198053  129976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:13:45.218208  129976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:13:45.379143  129976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:13:45.401287  129976 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611 for IP: 192.168.72.89
	I0420 01:13:45.401332  129976 certs.go:194] generating shared ca certs ...
	I0420 01:13:45.401353  129976 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:45.401551  129976 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:13:45.401613  129976 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:13:45.401629  129976 certs.go:256] generating profile certs ...
	I0420 01:13:45.401744  129976 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.key
	I0420 01:13:45.401771  129976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt with IP's: []
	I0420 01:13:45.603103  129976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt ...
	I0420 01:13:45.603136  129976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: {Name:mkf7c16c29491cf10ae348d5d6a9568f5dd3a577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:45.611976  129976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.key ...
	I0420 01:13:45.612010  129976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.key: {Name:mkda619c06cd469e651e9324f6aa5c0822c23587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:45.612183  129976 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.key.8269e866
	I0420 01:13:45.612216  129976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.crt.8269e866 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.89]
	I0420 01:13:45.843364  129976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.crt.8269e866 ...
	I0420 01:13:45.843390  129976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.crt.8269e866: {Name:mk80b473342af82e50e62c225e322d9610bde993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:45.843548  129976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.key.8269e866 ...
	I0420 01:13:45.843566  129976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.key.8269e866: {Name:mk6b90135f0c63b568f3a37f5a319d693ca475a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:45.843664  129976 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.crt.8269e866 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.crt
	I0420 01:13:45.843759  129976 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.key.8269e866 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.key
	I0420 01:13:45.843849  129976 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/proxy-client.key
	I0420 01:13:45.843867  129976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/proxy-client.crt with IP's: []
	I0420 01:13:46.063919  129976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/proxy-client.crt ...
	I0420 01:13:46.063949  129976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/proxy-client.crt: {Name:mkeaca61f03fd1f7968dbaa0dfff264c4cb582e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:46.064121  129976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/proxy-client.key ...
	I0420 01:13:46.064139  129976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/proxy-client.key: {Name:mk8f711c537d8d9641871ef44117757078f54260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:46.064352  129976 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:13:46.064391  129976 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:13:46.064402  129976 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:13:46.064428  129976 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:13:46.064452  129976 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:13:46.064474  129976 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:13:46.064512  129976 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:13:46.065176  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:13:46.118126  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:13:46.157352  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:13:46.193440  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:13:46.228914  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0420 01:13:46.259230  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:13:46.290960  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:13:46.319344  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:13:46.347739  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:13:46.375828  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:13:46.405778  129976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:13:46.433927  129976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:13:46.456387  129976 ssh_runner.go:195] Run: openssl version
	I0420 01:13:46.463629  129976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:13:46.478891  129976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:13:46.484505  129976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:13:46.484571  129976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:13:46.492650  129976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:13:46.509224  129976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:13:46.524549  129976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:13:46.530246  129976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:13:46.530312  129976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:13:46.537425  129976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:13:46.552391  129976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:13:46.567547  129976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:13:46.575515  129976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:13:46.575599  129976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:13:46.585697  129976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:13:46.607716  129976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:13:46.612987  129976 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 01:13:46.613094  129976 kubeadm.go:391] StartCluster: {Name:flannel-831611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-831611 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:13:46.613221  129976 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:13:46.613353  129976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:13:46.655848  129976 cri.go:89] found id: ""
	I0420 01:13:46.655948  129976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 01:13:46.670620  129976 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:13:46.684071  129976 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:13:46.697955  129976 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:13:46.697981  129976 kubeadm.go:156] found existing configuration files:
	
	I0420 01:13:46.698031  129976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:13:46.711331  129976 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:13:46.711420  129976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:13:46.727286  129976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:13:46.738139  129976 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:13:46.738205  129976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:13:46.750292  129976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:13:46.761761  129976 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:13:46.761819  129976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:13:46.777059  129976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:13:46.789821  129976 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:13:46.789882  129976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
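Note on the grep/rm sequence above: this is the stale-config cleanup. Each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; on this first start none of the files exist, so every grep exits with status 2 and the rm calls are no-ops before kubeadm regenerates the files. A hedged Go sketch of the same check (file list and endpoint hard-coded as assumptions):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: remove it so kubeadm regenerates it.
                os.Remove(conf)
                fmt.Printf("removed stale %s\n", conf)
                continue
            }
            fmt.Printf("kept %s\n", conf)
        }
    }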
	I0420 01:13:46.802355  129976 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:13:47.050946  129976 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:13:44.944074  128503 node_ready.go:49] node "kindnet-831611" has status "Ready":"True"
	I0420 01:13:44.944130  128503 node_ready.go:38] duration metric: took 505.344273ms for node "kindnet-831611" to be "Ready" ...
	I0420 01:13:44.944144  128503 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:13:44.944502  128503 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-831611" context rescaled to 1 replicas
	I0420 01:13:44.953227  128503 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-4gkhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:47.815328  128503 pod_ready.go:92] pod "coredns-7db6d8ff4d-4gkhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:47.815396  128503 pod_ready.go:81] duration metric: took 2.862133509s for pod "coredns-7db6d8ff4d-4gkhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:47.815433  128503 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:47.851041  128503 pod_ready.go:92] pod "etcd-kindnet-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:47.851385  128503 pod_ready.go:81] duration metric: took 35.928348ms for pod "etcd-kindnet-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:47.851444  128503 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:49.358549  128503 pod_ready.go:92] pod "kube-apiserver-kindnet-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:49.358577  128503 pod_ready.go:81] duration metric: took 1.507106558s for pod "kube-apiserver-kindnet-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:49.358590  128503 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:49.364654  128503 pod_ready.go:92] pod "kube-controller-manager-kindnet-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:49.364677  128503 pod_ready.go:81] duration metric: took 6.07944ms for pod "kube-controller-manager-kindnet-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:49.364686  128503 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-pcgnb" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:49.369393  128503 pod_ready.go:92] pod "kube-proxy-pcgnb" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:49.369415  128503 pod_ready.go:81] duration metric: took 4.721903ms for pod "kube-proxy-pcgnb" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:49.369423  128503 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:49.374082  128503 pod_ready.go:92] pod "kube-scheduler-kindnet-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:49.374102  128503 pod_ready.go:81] duration metric: took 4.673685ms for pod "kube-scheduler-kindnet-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:49.374111  128503 pod_ready.go:38] duration metric: took 4.429951123s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
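Note on the pod_ready waits above: each system-critical pod is polled until its PodReady condition reports True. A minimal client-go sketch of that predicate; the kubeconfig path, namespace and pod name are placeholder assumptions, and minikube's own implementation differs in detail:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named pod has condition Ready=True.
    func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := isPodReady(cs, "kube-system", "etcd-kindnet-831611")
        fmt.Println(ready, err)
    }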
	I0420 01:13:49.374132  128503 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:13:49.374187  128503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:13:49.391385  128503 api_server.go:72] duration metric: took 6.273650322s to wait for apiserver process to appear ...
	I0420 01:13:49.391415  128503 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:13:49.391450  128503 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I0420 01:13:49.395898  128503 api_server.go:279] https://192.168.61.217:8443/healthz returned 200:
	ok
	I0420 01:13:49.397122  128503 api_server.go:141] control plane version: v1.30.0
	I0420 01:13:49.397168  128503 api_server.go:131] duration metric: took 5.744687ms to wait for apiserver health ...
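Note on the healthz wait above: it simply requests https://<apiserver>:8443/healthz until the endpoint returns 200 with body "ok". A minimal polling sketch; for brevity it skips TLS verification, which is an assumption of the sketch only (the real client trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch-only shortcut; production code should load the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.217:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // expect "ok"
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver never became healthy")
    }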
	I0420 01:13:49.397178  128503 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:13:49.403402  128503 system_pods.go:59] 8 kube-system pods found
	I0420 01:13:49.403440  128503 system_pods.go:61] "coredns-7db6d8ff4d-4gkhp" [e6e4077c-5116-41eb-bb9d-8a4f0e382f90] Running
	I0420 01:13:49.403447  128503 system_pods.go:61] "etcd-kindnet-831611" [a18e4e12-fbd5-4dcc-b5d1-0b80a9d25cd4] Running
	I0420 01:13:49.403451  128503 system_pods.go:61] "kindnet-n7m4d" [61e46fed-c489-4040-8e44-3064d4d13ebe] Running
	I0420 01:13:49.403454  128503 system_pods.go:61] "kube-apiserver-kindnet-831611" [d198078a-371f-4ad3-9852-c59fd15eb568] Running
	I0420 01:13:49.403457  128503 system_pods.go:61] "kube-controller-manager-kindnet-831611" [0e29d6eb-126a-4e97-a466-c23efb454c9d] Running
	I0420 01:13:49.403460  128503 system_pods.go:61] "kube-proxy-pcgnb" [e4a54e64-90ab-4cbc-8f39-aab327f6a793] Running
	I0420 01:13:49.403463  128503 system_pods.go:61] "kube-scheduler-kindnet-831611" [4e5ea430-2fcc-4278-9386-d84d3efba723] Running
	I0420 01:13:49.403465  128503 system_pods.go:61] "storage-provisioner" [1737ec70-fba9-43a5-a4f9-4f4e587f2801] Running
	I0420 01:13:49.403471  128503 system_pods.go:74] duration metric: took 6.286667ms to wait for pod list to return data ...
	I0420 01:13:49.403483  128503 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:13:49.418629  128503 default_sa.go:45] found service account: "default"
	I0420 01:13:49.418662  128503 default_sa.go:55] duration metric: took 15.167766ms for default service account to be created ...
	I0420 01:13:49.418679  128503 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:13:49.623744  128503 system_pods.go:86] 8 kube-system pods found
	I0420 01:13:49.623779  128503 system_pods.go:89] "coredns-7db6d8ff4d-4gkhp" [e6e4077c-5116-41eb-bb9d-8a4f0e382f90] Running
	I0420 01:13:49.623787  128503 system_pods.go:89] "etcd-kindnet-831611" [a18e4e12-fbd5-4dcc-b5d1-0b80a9d25cd4] Running
	I0420 01:13:49.623794  128503 system_pods.go:89] "kindnet-n7m4d" [61e46fed-c489-4040-8e44-3064d4d13ebe] Running
	I0420 01:13:49.623814  128503 system_pods.go:89] "kube-apiserver-kindnet-831611" [d198078a-371f-4ad3-9852-c59fd15eb568] Running
	I0420 01:13:49.623821  128503 system_pods.go:89] "kube-controller-manager-kindnet-831611" [0e29d6eb-126a-4e97-a466-c23efb454c9d] Running
	I0420 01:13:49.623828  128503 system_pods.go:89] "kube-proxy-pcgnb" [e4a54e64-90ab-4cbc-8f39-aab327f6a793] Running
	I0420 01:13:49.623834  128503 system_pods.go:89] "kube-scheduler-kindnet-831611" [4e5ea430-2fcc-4278-9386-d84d3efba723] Running
	I0420 01:13:49.623840  128503 system_pods.go:89] "storage-provisioner" [1737ec70-fba9-43a5-a4f9-4f4e587f2801] Running
	I0420 01:13:49.623848  128503 system_pods.go:126] duration metric: took 205.163487ms to wait for k8s-apps to be running ...
	I0420 01:13:49.623863  128503 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:13:49.623914  128503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:13:49.643091  128503 system_svc.go:56] duration metric: took 19.215088ms WaitForService to wait for kubelet
	I0420 01:13:49.643130  128503 kubeadm.go:576] duration metric: took 6.525400924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:13:49.643185  128503 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:13:49.820966  128503 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:13:49.821004  128503 node_conditions.go:123] node cpu capacity is 2
	I0420 01:13:49.821020  128503 node_conditions.go:105] duration metric: took 177.827615ms to run NodePressure ...
	I0420 01:13:49.821036  128503 start.go:240] waiting for startup goroutines ...
	I0420 01:13:49.821047  128503 start.go:245] waiting for cluster config update ...
	I0420 01:13:49.821061  128503 start.go:254] writing updated cluster config ...
	I0420 01:13:49.821399  128503 ssh_runner.go:195] Run: rm -f paused
	I0420 01:13:49.883208  128503 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:13:49.886126  128503 out.go:177] * Done! kubectl is now configured to use "kindnet-831611" cluster and "default" namespace by default
	I0420 01:13:51.319037  130085 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.602368031s)
	I0420 01:13:51.319074  130085 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:13:51.319136  130085 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:13:51.325155  130085 start.go:562] Will wait 60s for crictl version
	I0420 01:13:51.325239  130085 ssh_runner.go:195] Run: which crictl
	I0420 01:13:51.330710  130085 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:13:51.373046  130085 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:13:51.373147  130085 ssh_runner.go:195] Run: crio --version
	I0420 01:13:51.410253  130085 ssh_runner.go:195] Run: crio --version
	I0420 01:13:51.448840  130085 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:13:49.660933  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:52.161491  128374 pod_ready.go:102] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"False"
	I0420 01:13:52.662842  128374 pod_ready.go:92] pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:52.662872  128374 pod_ready.go:81] duration metric: took 28.009792727s for pod "coredns-7db6d8ff4d-thrzj" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:52.662886  128374 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:52.671266  128374 pod_ready.go:92] pod "etcd-enable-default-cni-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:52.671292  128374 pod_ready.go:81] duration metric: took 8.397149ms for pod "etcd-enable-default-cni-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:52.671308  128374 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:52.678604  128374 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:52.678699  128374 pod_ready.go:81] duration metric: took 7.381478ms for pod "kube-apiserver-enable-default-cni-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:52.678748  128374 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:52.685603  128374 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:52.685640  128374 pod_ready.go:81] duration metric: took 6.88057ms for pod "kube-controller-manager-enable-default-cni-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:52.685654  128374 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-5mb7c" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:52.719699  128374 pod_ready.go:92] pod "kube-proxy-5mb7c" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:52.719733  128374 pod_ready.go:81] duration metric: took 34.070015ms for pod "kube-proxy-5mb7c" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:52.719748  128374 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:53.059359  128374 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:13:53.059385  128374 pod_ready.go:81] duration metric: took 339.627976ms for pod "kube-scheduler-enable-default-cni-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:13:53.059397  128374 pod_ready.go:38] duration metric: took 39.94372891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:13:53.059419  128374 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:13:53.059487  128374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:13:53.084223  128374 api_server.go:72] duration metric: took 41.469462019s to wait for apiserver process to appear ...
	I0420 01:13:53.084257  128374 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:13:53.084286  128374 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0420 01:13:53.090158  128374 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0420 01:13:53.091566  128374 api_server.go:141] control plane version: v1.30.0
	I0420 01:13:53.091648  128374 api_server.go:131] duration metric: took 7.32867ms to wait for apiserver health ...
	I0420 01:13:53.091661  128374 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:13:53.263867  128374 system_pods.go:59] 7 kube-system pods found
	I0420 01:13:53.263909  128374 system_pods.go:61] "coredns-7db6d8ff4d-thrzj" [fc000686-6dfa-4bec-b41d-5146cfedca5c] Running
	I0420 01:13:53.263917  128374 system_pods.go:61] "etcd-enable-default-cni-831611" [914ba6e5-22a3-48a6-8981-033d589f6321] Running
	I0420 01:13:53.263923  128374 system_pods.go:61] "kube-apiserver-enable-default-cni-831611" [c8e4a8b5-66e3-48a7-a3f3-fac7fce0fdbf] Running
	I0420 01:13:53.263929  128374 system_pods.go:61] "kube-controller-manager-enable-default-cni-831611" [c3861808-aed2-4096-99e8-cffc34f02365] Running
	I0420 01:13:53.263934  128374 system_pods.go:61] "kube-proxy-5mb7c" [1e1f00db-c1e8-41a3-ae5a-839e580909d4] Running
	I0420 01:13:53.263939  128374 system_pods.go:61] "kube-scheduler-enable-default-cni-831611" [2893e775-4659-43ac-b0c3-23bfee6f1b48] Running
	I0420 01:13:53.263944  128374 system_pods.go:61] "storage-provisioner" [a3ed8d8a-42c7-42b7-8976-1ff040ae5653] Running
	I0420 01:13:53.263951  128374 system_pods.go:74] duration metric: took 172.283481ms to wait for pod list to return data ...
	I0420 01:13:53.263961  128374 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:13:51.450102  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetIP
	I0420 01:13:51.453288  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:51.453692  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:00:79", ip: ""} in network mk-kubernetes-upgrade-345460: {Iface:virbr4 ExpiryTime:2024-04-20 02:12:00 +0000 UTC Type:0 Mac:52:54:00:d3:00:79 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:kubernetes-upgrade-345460 Clientid:01:52:54:00:d3:00:79}
	I0420 01:13:51.453725  130085 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined IP address 192.168.50.68 and MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:13:51.454131  130085 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0420 01:13:51.459583  130085 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-345460 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:13:51.459727  130085 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:13:51.459784  130085 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:13:51.522704  130085 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:13:51.522734  130085 crio.go:433] Images already preloaded, skipping extraction
	I0420 01:13:51.522791  130085 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:13:51.573899  130085 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:13:51.573930  130085 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:13:51.573941  130085 kubeadm.go:928] updating node { 192.168.50.68 8443 v1.30.0 crio true true} ...
	I0420 01:13:51.574093  130085 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-345460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:13:51.574179  130085 ssh_runner.go:195] Run: crio config
	I0420 01:13:51.641051  130085 cni.go:84] Creating CNI manager for ""
	I0420 01:13:51.641087  130085 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:13:51.641104  130085 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:13:51.641134  130085 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.68 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-345460 NodeName:kubernetes-upgrade-345460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:13:51.641368  130085 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-345460"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
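Note on the generated config above: it is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---, and kubeadm reads all of them from the one --config path written a few lines below. A small sketch that splits such a file and reports each document's kind; it uses only plain string handling, no YAML library, and the path is the one the runner uses (an assumption for the sketch):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path used by the runner above
        if err != nil {
            panic(err)
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            kind := "unknown"
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
                    kind = strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "kind:"))
                    break
                }
            }
            fmt.Printf("document %d: %s\n", i+1, kind)
        }
    }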
	
	I0420 01:13:51.641459  130085 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:13:51.658528  130085 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:13:51.658616  130085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:13:51.670491  130085 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0420 01:13:51.694005  130085 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:13:51.716448  130085 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0420 01:13:51.739644  130085 ssh_runner.go:195] Run: grep 192.168.50.68	control-plane.minikube.internal$ /etc/hosts
	I0420 01:13:51.744428  130085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:13:51.913210  130085 ssh_runner.go:195] Run: sudo systemctl start kubelet
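Note on the three scp-from-memory writes above: they materialize the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit and kubeadm.yaml.new on the node, then daemon-reload and start kubelet. In the drop-in shown earlier in the log, the empty ExecStart= line clears the packaged command before the minikube-specific one is set, which is standard systemd override behaviour. A condensed sketch of the same write-and-reload sequence (the drop-in contents are abbreviated here as an assumption; see the log for the full flag set):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        dropIn := `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
    ` // abbreviated: the real unit carries more flags (see the log above)

        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "start", "kubelet"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
    }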
	I0420 01:13:51.930756  130085 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460 for IP: 192.168.50.68
	I0420 01:13:51.930780  130085 certs.go:194] generating shared ca certs ...
	I0420 01:13:51.930795  130085 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:13:51.930937  130085 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:13:51.931011  130085 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:13:51.931027  130085 certs.go:256] generating profile certs ...
	I0420 01:13:51.931163  130085 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/client.key
	I0420 01:13:51.931217  130085 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.key.514fbe5a
	I0420 01:13:51.931247  130085 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.key
	I0420 01:13:51.931353  130085 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:13:51.931386  130085 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:13:51.931393  130085 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:13:51.931414  130085 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:13:51.931439  130085 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:13:51.931463  130085 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:13:51.931501  130085 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:13:51.932219  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:13:51.961049  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:13:51.993422  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:13:52.021868  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:13:52.052926  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0420 01:13:52.083351  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:13:52.116883  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:13:52.148875  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:13:52.186068  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:13:52.222200  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:13:52.256196  130085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:13:52.288547  130085 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:13:52.312832  130085 ssh_runner.go:195] Run: openssl version
	I0420 01:13:52.321899  130085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:13:52.339973  130085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:13:52.346454  130085 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:13:52.346548  130085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:13:52.354121  130085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:13:52.365423  130085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:13:52.378843  130085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:13:52.384863  130085 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:13:52.384937  130085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:13:52.392578  130085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:13:52.404565  130085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:13:52.418656  130085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:13:52.425848  130085 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:13:52.425947  130085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:13:52.434920  130085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:13:52.451379  130085 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:13:52.458920  130085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:13:52.466723  130085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:13:52.474184  130085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:13:52.481850  130085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:13:52.492076  130085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:13:52.499997  130085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
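Note on the -checkend runs above: because this profile already has certificates (unlike the fresh flannel profile earlier), the runner re-validates each one with openssl x509 -checkend 86400, i.e. "does this certificate still have at least 24 hours of validity left?". The same check in pure Go, a sketch with a placeholder path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d,
    // the Go equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }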
	I0420 01:13:52.509066  130085 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-345460 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:13:52.509194  130085 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:13:52.509331  130085 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:13:52.562757  130085 cri.go:89] found id: "409a2ff29fe002495cb6ff266a8f59fed390a8916b6dd3d24e49bf3f4eacc1e8"
	I0420 01:13:52.562796  130085 cri.go:89] found id: "204485c6c0d93bae9e93b18bf168eb4ccfbf0fb532f1cc1ec8ddded0ea5f164d"
	I0420 01:13:52.562802  130085 cri.go:89] found id: "b7409ae78cdfad4e4dcd3e46d0fe33d68639acecf5e62a6503f4952f7c25c678"
	I0420 01:13:52.562822  130085 cri.go:89] found id: "770a82a274d45a95b6718a8ea9771145cdfc7e220ba1948f20b2ad1289965045"
	I0420 01:13:52.562827  130085 cri.go:89] found id: "e4aa6d261137c7d171dac541ec455873854b64b204eecf3dde091ea7c567849a"
	I0420 01:13:52.562834  130085 cri.go:89] found id: "95607bfe05b9aec4a2a95dae72ba81d9ddba3a633a67aa4353eb8594365795eb"
	I0420 01:13:52.562838  130085 cri.go:89] found id: "214db0345eb9ea03dc56feb8473a5488a19b93c3866ec17447516b9b37a93491"
	I0420 01:13:52.562841  130085 cri.go:89] found id: "6cef60b4f893b33e2557fd61a691eb6ce859313116f5596c7062ad3fce2405c0"
	I0420 01:13:52.562846  130085 cri.go:89] found id: ""
	I0420 01:13:52.562904  130085 ssh_runner.go:195] Run: sudo runc list -f json
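Note on the listing above: before re-running kubeadm, StartCluster enumerates the existing kube-system containers so they can be handled; the eight IDs come from crictl ps -a filtered by the pod-namespace label. A sketch of issuing that query from Go (availability of sudo and crictl on the host is assumed):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out)) // one container ID per line
        fmt.Printf("found %d kube-system containers\n", len(ids))
        for _, id := range ids {
            fmt.Println(id)
        }
    }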
	
	
	==> CRI-O <==
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.917015201Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6ff2ce3a58cdd10b5c33a1ebd08b1f5e823d609ef01e97d77fe1cfdf4ad6dd52,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6578494d-6974-4d36-be38-e29cd4360d54,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713575640939842473,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\
":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-20T01:14:00.610370355Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69011ff320d6ebcc085582c7f536b68bfeea0725937146cda30d6a3cfd8f3449,Metadata:&PodSandboxMetadata{Name:kube-proxy-ww5tg,Uid:d5469f00-e1bf-442d-a31e-c567dd8057df,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713575640936128288,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a
31e-c567dd8057df,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:14:00.610368039Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d115d9219fbcbcbdcf164122c6e298dd841248f6e581a59e49d9eae53f86eea,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-345460,Uid:0c6292e0866ee3cde7dabd0701dde31a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713575636060882947,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.68:8443,kubernetes.io/config.hash: 0c6292e0866ee3cde7dabd0701dde31a,kubernetes.io/config.seen: 2024-04-20T01:13:55.607437708Z,kubernetes.io/config.source: file,},Runt
imeHandler:,},&PodSandbox{Id:8b1d00c6f5676218c54df7a1bbe27ecd727717a0cf55eb0f4765acd35740f1b0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-2rg9m,Uid:8b4b1d84-0d3a-46fa-a156-bcd858000abf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713575632899282134,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bcd858000abf,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:12:40.907395712Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b24e25f89e978fda33052d86b2ac676d1b25653330b9c58afcddefee6e90767f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-345460,Uid:27ac40f10df842a2537f73774f66d40a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713575632812276415,Labels:map[string]string{component: kube-scheduler,io.kubernetes.conta
iner.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 27ac40f10df842a2537f73774f66d40a,kubernetes.io/config.seen: 2024-04-20T01:12:21.010637065Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a7e596a3ac92b2ca5d32012bd9500c63d414b36fe2ab51b487fc8fe4048dac2b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzndz,Uid:d9126f8a-d858-468a-96b4-347ae95b8ea6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713575632801447122,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9126f8a-d858-468a-96b4-347ae95b8ea6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:12:40.952258239Z,kubernetes.i
o/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f2355c092ce2f0b16cf98d3e992786cd05fd65615c25a2b7ca495303f21127c,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-345460,Uid:08d520bb0f1a9c6e38541787795aa99a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713575632790932006,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.68:2379,kubernetes.io/config.hash: 08d520bb0f1a9c6e38541787795aa99a,kubernetes.io/config.seen: 2024-04-20T01:12:21.057984025Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:921e757ac71d61635db91f6038fb67f9e03c8d2e86930a6213ba2ba6664e350a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-345460,Uid:76fe84f3e597d330b7a31
0e0b181ada9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713575632749989120,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 76fe84f3e597d330b7a310e0b181ada9,kubernetes.io/config.seen: 2024-04-20T01:12:21.007746619Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0a804ccc81902d3d539f14e061d6f241648979f74f1b464b5d756ba597f4207,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzndz,Uid:d9126f8a-d858-468a-96b4-347ae95b8ea6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713575561279643555,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d9126f8a-d858-468a-96b4-347ae95b8ea6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:12:40.952258239Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6a29c5754de82c921d4d4c56712a5c1075b1b3ab929b4364bf10f5a11962b3fe,Metadata:&PodSandboxMetadata{Name:kube-proxy-ww5tg,Uid:d5469f00-e1bf-442d-a31e-c567dd8057df,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713575561278273198,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a31e-c567dd8057df,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:12:40.951940290Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:54b06146567eb22e7c7e0b880fee228b748a9d64996469cd2d31afb6582ade97,Metadata:&PodSandboxMetadata{
Name:coredns-7db6d8ff4d-2rg9m,Uid:8b4b1d84-0d3a-46fa-a156-bcd858000abf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713575561214943028,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bcd858000abf,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:12:40.907395712Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d953f428ec826e075b9b3d26c951e788bbe299e6dd7430d681b1827f7c480f2e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6578494d-6974-4d36-be38-e29cd4360d54,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713575561199774854,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-20T01:12:40.888041352Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:214ca4a05320b6835b91ee33c0ab1d5d9e436bacac3cbdc44175a56ae533f1e3,Metadata
:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-345460,Uid:08d520bb0f1a9c6e38541787795aa99a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713575541540365828,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.68:2379,kubernetes.io/config.hash: 08d520bb0f1a9c6e38541787795aa99a,kubernetes.io/config.seen: 2024-04-20T01:12:21.057984025Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:446211727cffaff3229e02f7f265782b6c4b2e07299a387681af32717b815144,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-345460,Uid:27ac40f10df842a2537f73774f66d40a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713575541528727328,Labels:map[string]string{component:
kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 27ac40f10df842a2537f73774f66d40a,kubernetes.io/config.seen: 2024-04-20T01:12:21.010637065Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ace44ed799a064daac4b03ffe1eaf8785e842639fe097e9205b25a67245fb228,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-345460,Uid:0c6292e0866ee3cde7dabd0701dde31a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713575541497644295,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,tier: control-plane,},Annotations:map[string]string{kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.68:8443,kubernetes.io/config.hash: 0c6292e0866ee3cde7dabd0701dde31a,kubernetes.io/config.seen: 2024-04-20T01:12:20.970932657Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1cce05180201067ddb2591d4acf2db0b07084175fc55760a79869dac3547bd63,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-345460,Uid:76fe84f3e597d330b7a310e0b181ada9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713575541494534061,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 76fe84f3e597d330b7a310e0b181ada9,kubernetes.io/config.seen: 2024-04-20T01:12:21.007746619Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},
}" file="otel-collector/interceptors.go:74" id=641d704f-1833-4a01-b9f8-babab69e3e04 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.918261033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df640637-1e3c-4a15-b857-7c5842b746d9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.918610565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df640637-1e3c-4a15-b857-7c5842b746d9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.918981863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52e233b30156c0ab0612b4d383239b775e406f0532f5f673d9ab5ae66fdf88ab,PodSandboxId:69011ff320d6ebcc085582c7f536b68bfeea0725937146cda30d6a3cfd8f3449,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575641346194504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a31e-c567dd8057df,},Annotations:map[string]string{io.kubernetes.container.hash: d561d495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa50eaf2f9e436e759b1652b6514e374b7463523e36c8974d7aa6bc6010ee6b,PodSandboxId:6ff2ce3a58cdd10b5c33a1ebd08b1f5e823d609ef01e97d77fe1cfdf4ad6dd52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713575641279446628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{io.kubernetes.container.hash: c9a143ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29963ff812db0a5fb85e1593357483a2dfac1d567b60f587a7cc77a8f3452fce,PodSandboxId:a7e596a3ac92b2ca5d32012bd9500c63d414b36fe2ab51b487fc8fe4048dac2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575640973108975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9126f8a-d858-468a-96b4-347ae95b8ea6,},Annotations:map[string]string{io.kubernetes.container.hash: bc4196fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736a2d1c817fca5294bf89b8cdf2765bd79d1b192bf00bc1452bf1525857b298,PodSandboxId:8b1d00c6f5676218c54df7a1bbe27ecd727717a0cf55eb0f4765acd35740f1b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575641013805372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bc
d858000abf,},Annotations:map[string]string{io.kubernetes.container.hash: 8d726b9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435308e05c9e1f27129aee9dd97ae9d0ee0cc0c39d9b556713c08b910b2d6640,PodSandboxId:0d115d9219fbcbcbdcf164122c6e298dd841248f6e581a59e49d9eae53f86eea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575636303371490,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c0cfe71,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669e1873a37a1505fc7ae863748c84fa977a677c9c723612a3ff173e8b21a63c,PodSandboxId:b24e25f89e978fda33052d86b2ac676d1b25653330b9c58afcddefee6e90767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575633274141430,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39279919f9447651b6e3bd391fded1960ba884729dac51f26166a1d1d7c875de,PodSandboxId:0f2355c092ce2f0b16cf98d3e992786cd05fd65615c25a2b7ca495303f21127c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575633183904942,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,},Annotations:map[string]string{io.kubernetes.container.hash: c3aaad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5511ddce4f6a01c3ed30a144a4ba8fbe48b554bff014947c9e7eab807709cbe8,PodSandboxId:921e757ac71d61635db91f6038fb67f9e03c8d2e86930a6213ba2ba6664e350a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575633124902394,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409a2ff29fe002495cb6ff266a8f59fed390a8916b6dd3d24e49bf3f4eacc1e8,PodSandboxId:d953f428ec826e075b9b3d26c951e788bbe299e6dd7430d681b1827f7c480f2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713575592403240641,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{io.kubernetes.container.hash: c9a143ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204485c6c0d93bae9e93b18bf168eb4ccfbf0fb532f1cc1ec8ddded0ea5f164d,PodSandboxId:54b06146567eb22e7c7e0b880fee228b748a9d64996469cd2d31afb6582ade97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575561963063315,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bcd858000abf,},Annotations:map[string]string{io.kubernetes.container.hash: 8d726b9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7409ae78cdfad4e4dcd3e46d0fe33d68639acecf5e62a6503f4952f7c25c678,PodSandboxId:a0a804ccc81902d3d539f14e061d6f241648979f74f1b464b5d756ba597f4207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575561903018831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9126f8a-d858-468a-96b4-347ae95b8ea6,},Annotations:map[string]string{io.kubernetes.container.hash: bc4196fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770a82a274d45a95b6718a8ea9771145cdfc7e220ba1948f20b2ad1289965045,PodSandboxId:6a29c5754de82c921d4d4c56712a5c1075b1b3ab929b4364bf10f5a11962
b3fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575561477393599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a31e-c567dd8057df,},Annotations:map[string]string{io.kubernetes.container.hash: d561d495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4aa6d261137c7d171dac541ec455873854b64b204eecf3dde091ea7c567849a,PodSandboxId:446211727cffaff3229e02f7f265782b6c4b2e07299a387681af32717b815144,Metadata:&ContainerMetadata{N
ame:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575541946474579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214db0345eb9ea03dc56feb8473a5488a19b93c3866ec17447516b9b37a93491,PodSandboxId:1cce05180201067ddb2591d4acf2db0b07084175fc55760a79869dac3547bd63,Metadata:&ContainerMetadata{Name:ku
be-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575541870355580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95607bfe05b9aec4a2a95dae72ba81d9ddba3a633a67aa4353eb8594365795eb,PodSandboxId:214ca4a05320b6835b91ee33c0ab1d5d9e436bacac3cbdc44175a56ae533f1e3,Metadata:&Cont
ainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575541905989861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,},Annotations:map[string]string{io.kubernetes.container.hash: c3aaad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cef60b4f893b33e2557fd61a691eb6ce859313116f5596c7062ad3fce2405c0,PodSandboxId:ace44ed799a064daac4b03ffe1eaf8785e842639fe097e9205b25a67245fb228,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575541753182023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c0cfe71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df640637-1e3c-4a15-b857-7c5842b746d9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.923130305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df7a4979-2804-498c-914d-200bc0ec5355 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.923645238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df7a4979-2804-498c-914d-200bc0ec5355 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.925176068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45a6bf35-3869-4579-9793-2f7d542ff78b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.925867923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575644925846649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45a6bf35-3869-4579-9793-2f7d542ff78b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.926906154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=616f651c-1f84-4bdb-8334-b202c7c93ca6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.926985339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=616f651c-1f84-4bdb-8334-b202c7c93ca6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.927385420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52e233b30156c0ab0612b4d383239b775e406f0532f5f673d9ab5ae66fdf88ab,PodSandboxId:69011ff320d6ebcc085582c7f536b68bfeea0725937146cda30d6a3cfd8f3449,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575641346194504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a31e-c567dd8057df,},Annotations:map[string]string{io.kubernetes.container.hash: d561d495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa50eaf2f9e436e759b1652b6514e374b7463523e36c8974d7aa6bc6010ee6b,PodSandboxId:6ff2ce3a58cdd10b5c33a1ebd08b1f5e823d609ef01e97d77fe1cfdf4ad6dd52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713575641279446628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{io.kubernetes.container.hash: c9a143ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29963ff812db0a5fb85e1593357483a2dfac1d567b60f587a7cc77a8f3452fce,PodSandboxId:a7e596a3ac92b2ca5d32012bd9500c63d414b36fe2ab51b487fc8fe4048dac2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575640973108975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9126f8a-d858-468a-96b4-347ae95b8ea6,},Annotations:map[string]string{io.kubernetes.container.hash: bc4196fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736a2d1c817fca5294bf89b8cdf2765bd79d1b192bf00bc1452bf1525857b298,PodSandboxId:8b1d00c6f5676218c54df7a1bbe27ecd727717a0cf55eb0f4765acd35740f1b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575641013805372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bc
d858000abf,},Annotations:map[string]string{io.kubernetes.container.hash: 8d726b9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435308e05c9e1f27129aee9dd97ae9d0ee0cc0c39d9b556713c08b910b2d6640,PodSandboxId:0d115d9219fbcbcbdcf164122c6e298dd841248f6e581a59e49d9eae53f86eea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575636303371490,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c0cfe71,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669e1873a37a1505fc7ae863748c84fa977a677c9c723612a3ff173e8b21a63c,PodSandboxId:b24e25f89e978fda33052d86b2ac676d1b25653330b9c58afcddefee6e90767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575633274141430,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39279919f9447651b6e3bd391fded1960ba884729dac51f26166a1d1d7c875de,PodSandboxId:0f2355c092ce2f0b16cf98d3e992786cd05fd65615c25a2b7ca495303f21127c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575633183904942,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,},Annotations:map[string]string{io.kubernetes.container.hash: c3aaad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5511ddce4f6a01c3ed30a144a4ba8fbe48b554bff014947c9e7eab807709cbe8,PodSandboxId:921e757ac71d61635db91f6038fb67f9e03c8d2e86930a6213ba2ba6664e350a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575633124902394,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409a2ff29fe002495cb6ff266a8f59fed390a8916b6dd3d24e49bf3f4eacc1e8,PodSandboxId:d953f428ec826e075b9b3d26c951e788bbe299e6dd7430d681b1827f7c480f2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713575592403240641,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{io.kubernetes.container.hash: c9a143ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204485c6c0d93bae9e93b18bf168eb4ccfbf0fb532f1cc1ec8ddded0ea5f164d,PodSandboxId:54b06146567eb22e7c7e0b880fee228b748a9d64996469cd2d31afb6582ade97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575561963063315,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bcd858000abf,},Annotations:map[string]string{io.kubernetes.container.hash: 8d726b9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7409ae78cdfad4e4dcd3e46d0fe33d68639acecf5e62a6503f4952f7c25c678,PodSandboxId:a0a804ccc81902d3d539f14e061d6f241648979f74f1b464b5d756ba597f4207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575561903018831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9126f8a-d858-468a-96b4-347ae95b8ea6,},Annotations:map[string]string{io.kubernetes.container.hash: bc4196fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770a82a274d45a95b6718a8ea9771145cdfc7e220ba1948f20b2ad1289965045,PodSandboxId:6a29c5754de82c921d4d4c56712a5c1075b1b3ab929b4364bf10f5a11962
b3fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575561477393599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a31e-c567dd8057df,},Annotations:map[string]string{io.kubernetes.container.hash: d561d495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4aa6d261137c7d171dac541ec455873854b64b204eecf3dde091ea7c567849a,PodSandboxId:446211727cffaff3229e02f7f265782b6c4b2e07299a387681af32717b815144,Metadata:&ContainerMetadata{N
ame:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575541946474579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214db0345eb9ea03dc56feb8473a5488a19b93c3866ec17447516b9b37a93491,PodSandboxId:1cce05180201067ddb2591d4acf2db0b07084175fc55760a79869dac3547bd63,Metadata:&ContainerMetadata{Name:ku
be-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575541870355580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95607bfe05b9aec4a2a95dae72ba81d9ddba3a633a67aa4353eb8594365795eb,PodSandboxId:214ca4a05320b6835b91ee33c0ab1d5d9e436bacac3cbdc44175a56ae533f1e3,Metadata:&Cont
ainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575541905989861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,},Annotations:map[string]string{io.kubernetes.container.hash: c3aaad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cef60b4f893b33e2557fd61a691eb6ce859313116f5596c7062ad3fce2405c0,PodSandboxId:ace44ed799a064daac4b03ffe1eaf8785e842639fe097e9205b25a67245fb228,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575541753182023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c0cfe71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=616f651c-1f84-4bdb-8334-b202c7c93ca6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.981959067Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=309879d4-aa8f-4eb0-8316-8b79f2c09f1f name=/runtime.v1.RuntimeService/Version
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.982097523Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=309879d4-aa8f-4eb0-8316-8b79f2c09f1f name=/runtime.v1.RuntimeService/Version
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.983767400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13477bec-4d57-4823-a503-46c22ef07fc5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.984535665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575644984511019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13477bec-4d57-4823-a503-46c22ef07fc5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.985364991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f22b6e80-5e65-4ad3-bfba-251c2c530fad name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.985418029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f22b6e80-5e65-4ad3-bfba-251c2c530fad name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:04 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:04.985850708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52e233b30156c0ab0612b4d383239b775e406f0532f5f673d9ab5ae66fdf88ab,PodSandboxId:69011ff320d6ebcc085582c7f536b68bfeea0725937146cda30d6a3cfd8f3449,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575641346194504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a31e-c567dd8057df,},Annotations:map[string]string{io.kubernetes.container.hash: d561d495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa50eaf2f9e436e759b1652b6514e374b7463523e36c8974d7aa6bc6010ee6b,PodSandboxId:6ff2ce3a58cdd10b5c33a1ebd08b1f5e823d609ef01e97d77fe1cfdf4ad6dd52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713575641279446628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{io.kubernetes.container.hash: c9a143ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29963ff812db0a5fb85e1593357483a2dfac1d567b60f587a7cc77a8f3452fce,PodSandboxId:a7e596a3ac92b2ca5d32012bd9500c63d414b36fe2ab51b487fc8fe4048dac2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575640973108975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9126f8a-d858-468a-96b4-347ae95b8ea6,},Annotations:map[string]string{io.kubernetes.container.hash: bc4196fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736a2d1c817fca5294bf89b8cdf2765bd79d1b192bf00bc1452bf1525857b298,PodSandboxId:8b1d00c6f5676218c54df7a1bbe27ecd727717a0cf55eb0f4765acd35740f1b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575641013805372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bc
d858000abf,},Annotations:map[string]string{io.kubernetes.container.hash: 8d726b9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435308e05c9e1f27129aee9dd97ae9d0ee0cc0c39d9b556713c08b910b2d6640,PodSandboxId:0d115d9219fbcbcbdcf164122c6e298dd841248f6e581a59e49d9eae53f86eea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575636303371490,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c0cfe71,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669e1873a37a1505fc7ae863748c84fa977a677c9c723612a3ff173e8b21a63c,PodSandboxId:b24e25f89e978fda33052d86b2ac676d1b25653330b9c58afcddefee6e90767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575633274141430,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39279919f9447651b6e3bd391fded1960ba884729dac51f26166a1d1d7c875de,PodSandboxId:0f2355c092ce2f0b16cf98d3e992786cd05fd65615c25a2b7ca495303f21127c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575633183904942,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,},Annotations:map[string]string{io.kubernetes.container.hash: c3aaad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5511ddce4f6a01c3ed30a144a4ba8fbe48b554bff014947c9e7eab807709cbe8,PodSandboxId:921e757ac71d61635db91f6038fb67f9e03c8d2e86930a6213ba2ba6664e350a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575633124902394,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409a2ff29fe002495cb6ff266a8f59fed390a8916b6dd3d24e49bf3f4eacc1e8,PodSandboxId:d953f428ec826e075b9b3d26c951e788bbe299e6dd7430d681b1827f7c480f2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713575592403240641,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{io.kubernetes.container.hash: c9a143ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204485c6c0d93bae9e93b18bf168eb4ccfbf0fb532f1cc1ec8ddded0ea5f164d,PodSandboxId:54b06146567eb22e7c7e0b880fee228b748a9d64996469cd2d31afb6582ade97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575561963063315,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bcd858000abf,},Annotations:map[string]string{io.kubernetes.container.hash: 8d726b9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7409ae78cdfad4e4dcd3e46d0fe33d68639acecf5e62a6503f4952f7c25c678,PodSandboxId:a0a804ccc81902d3d539f14e061d6f241648979f74f1b464b5d756ba597f4207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575561903018831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9126f8a-d858-468a-96b4-347ae95b8ea6,},Annotations:map[string]string{io.kubernetes.container.hash: bc4196fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770a82a274d45a95b6718a8ea9771145cdfc7e220ba1948f20b2ad1289965045,PodSandboxId:6a29c5754de82c921d4d4c56712a5c1075b1b3ab929b4364bf10f5a11962
b3fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575561477393599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a31e-c567dd8057df,},Annotations:map[string]string{io.kubernetes.container.hash: d561d495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4aa6d261137c7d171dac541ec455873854b64b204eecf3dde091ea7c567849a,PodSandboxId:446211727cffaff3229e02f7f265782b6c4b2e07299a387681af32717b815144,Metadata:&ContainerMetadata{N
ame:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575541946474579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214db0345eb9ea03dc56feb8473a5488a19b93c3866ec17447516b9b37a93491,PodSandboxId:1cce05180201067ddb2591d4acf2db0b07084175fc55760a79869dac3547bd63,Metadata:&ContainerMetadata{Name:ku
be-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575541870355580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95607bfe05b9aec4a2a95dae72ba81d9ddba3a633a67aa4353eb8594365795eb,PodSandboxId:214ca4a05320b6835b91ee33c0ab1d5d9e436bacac3cbdc44175a56ae533f1e3,Metadata:&Cont
ainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575541905989861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,},Annotations:map[string]string{io.kubernetes.container.hash: c3aaad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cef60b4f893b33e2557fd61a691eb6ce859313116f5596c7062ad3fce2405c0,PodSandboxId:ace44ed799a064daac4b03ffe1eaf8785e842639fe097e9205b25a67245fb228,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575541753182023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c0cfe71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f22b6e80-5e65-4ad3-bfba-251c2c530fad name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:05 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:05.028905023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6617c8b-d572-40a5-a41d-b0b094f85a31 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:14:05 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:05.028976688Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6617c8b-d572-40a5-a41d-b0b094f85a31 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:14:05 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:05.030471235Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b978ca2e-dd8c-4475-8129-9d183c1e11bb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:14:05 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:05.031288217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575645031264175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b978ca2e-dd8c-4475-8129-9d183c1e11bb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:14:05 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:05.032124146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f9a3153-f038-4a40-859e-77b1c09b5307 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:05 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:05.032196354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f9a3153-f038-4a40-859e-77b1c09b5307 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:14:05 kubernetes-upgrade-345460 crio[2445]: time="2024-04-20 01:14:05.032503956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52e233b30156c0ab0612b4d383239b775e406f0532f5f673d9ab5ae66fdf88ab,PodSandboxId:69011ff320d6ebcc085582c7f536b68bfeea0725937146cda30d6a3cfd8f3449,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575641346194504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a31e-c567dd8057df,},Annotations:map[string]string{io.kubernetes.container.hash: d561d495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa50eaf2f9e436e759b1652b6514e374b7463523e36c8974d7aa6bc6010ee6b,PodSandboxId:6ff2ce3a58cdd10b5c33a1ebd08b1f5e823d609ef01e97d77fe1cfdf4ad6dd52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713575641279446628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{io.kubernetes.container.hash: c9a143ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29963ff812db0a5fb85e1593357483a2dfac1d567b60f587a7cc77a8f3452fce,PodSandboxId:a7e596a3ac92b2ca5d32012bd9500c63d414b36fe2ab51b487fc8fe4048dac2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575640973108975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9126f8a-d858-468a-96b4-347ae95b8ea6,},Annotations:map[string]string{io.kubernetes.container.hash: bc4196fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736a2d1c817fca5294bf89b8cdf2765bd79d1b192bf00bc1452bf1525857b298,PodSandboxId:8b1d00c6f5676218c54df7a1bbe27ecd727717a0cf55eb0f4765acd35740f1b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575641013805372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bc
d858000abf,},Annotations:map[string]string{io.kubernetes.container.hash: 8d726b9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435308e05c9e1f27129aee9dd97ae9d0ee0cc0c39d9b556713c08b910b2d6640,PodSandboxId:0d115d9219fbcbcbdcf164122c6e298dd841248f6e581a59e49d9eae53f86eea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575636303371490,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c0cfe71,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669e1873a37a1505fc7ae863748c84fa977a677c9c723612a3ff173e8b21a63c,PodSandboxId:b24e25f89e978fda33052d86b2ac676d1b25653330b9c58afcddefee6e90767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575633274141430,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39279919f9447651b6e3bd391fded1960ba884729dac51f26166a1d1d7c875de,PodSandboxId:0f2355c092ce2f0b16cf98d3e992786cd05fd65615c25a2b7ca495303f21127c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575633183904942,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,},Annotations:map[string]string{io.kubernetes.container.hash: c3aaad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5511ddce4f6a01c3ed30a144a4ba8fbe48b554bff014947c9e7eab807709cbe8,PodSandboxId:921e757ac71d61635db91f6038fb67f9e03c8d2e86930a6213ba2ba6664e350a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575633124902394,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409a2ff29fe002495cb6ff266a8f59fed390a8916b6dd3d24e49bf3f4eacc1e8,PodSandboxId:d953f428ec826e075b9b3d26c951e788bbe299e6dd7430d681b1827f7c480f2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713575592403240641,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6578494d-6974-4d36-be38-e29cd4360d54,},Annotations:map[string]string{io.kubernetes.container.hash: c9a143ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204485c6c0d93bae9e93b18bf168eb4ccfbf0fb532f1cc1ec8ddded0ea5f164d,PodSandboxId:54b06146567eb22e7c7e0b880fee228b748a9d64996469cd2d31afb6582ade97,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575561963063315,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4b1d84-0d3a-46fa-a156-bcd858000abf,},Annotations:map[string]string{io.kubernetes.container.hash: 8d726b9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7409ae78cdfad4e4dcd3e46d0fe33d68639acecf5e62a6503f4952f7c25c678,PodSandboxId:a0a804ccc81902d3d539f14e061d6f241648979f74f1b464b5d756ba597f4207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575561903018831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hzndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9126f8a-d858-468a-96b4-347ae95b8ea6,},Annotations:map[string]string{io.kubernetes.container.hash: bc4196fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770a82a274d45a95b6718a8ea9771145cdfc7e220ba1948f20b2ad1289965045,PodSandboxId:6a29c5754de82c921d4d4c56712a5c1075b1b3ab929b4364bf10f5a11962
b3fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575561477393599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ww5tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5469f00-e1bf-442d-a31e-c567dd8057df,},Annotations:map[string]string{io.kubernetes.container.hash: d561d495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4aa6d261137c7d171dac541ec455873854b64b204eecf3dde091ea7c567849a,PodSandboxId:446211727cffaff3229e02f7f265782b6c4b2e07299a387681af32717b815144,Metadata:&ContainerMetadata{N
ame:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575541946474579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ac40f10df842a2537f73774f66d40a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214db0345eb9ea03dc56feb8473a5488a19b93c3866ec17447516b9b37a93491,PodSandboxId:1cce05180201067ddb2591d4acf2db0b07084175fc55760a79869dac3547bd63,Metadata:&ContainerMetadata{Name:ku
be-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575541870355580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fe84f3e597d330b7a310e0b181ada9,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95607bfe05b9aec4a2a95dae72ba81d9ddba3a633a67aa4353eb8594365795eb,PodSandboxId:214ca4a05320b6835b91ee33c0ab1d5d9e436bacac3cbdc44175a56ae533f1e3,Metadata:&Cont
ainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575541905989861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d520bb0f1a9c6e38541787795aa99a,},Annotations:map[string]string{io.kubernetes.container.hash: c3aaad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cef60b4f893b33e2557fd61a691eb6ce859313116f5596c7062ad3fce2405c0,PodSandboxId:ace44ed799a064daac4b03ffe1eaf8785e842639fe097e9205b25a67245fb228,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575541753182023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-345460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6292e0866ee3cde7dabd0701dde31a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c0cfe71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f9a3153-f038-4a40-859e-77b1c09b5307 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	52e233b30156c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   3 seconds ago        Running             kube-proxy                1                   69011ff320d6e       kube-proxy-ww5tg
	cfa50eaf2f9e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   6ff2ce3a58cdd       storage-provisioner
	736a2d1c817fc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago        Running             coredns                   1                   8b1d00c6f5676       coredns-7db6d8ff4d-2rg9m
	29963ff812db0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago        Running             coredns                   1                   a7e596a3ac92b       coredns-7db6d8ff4d-hzndz
	435308e05c9e1       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   8 seconds ago        Running             kube-apiserver            1                   0d115d9219fbc       kube-apiserver-kubernetes-upgrade-345460
	669e1873a37a1       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   11 seconds ago       Running             kube-scheduler            1                   b24e25f89e978       kube-scheduler-kubernetes-upgrade-345460
	39279919f9447       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   11 seconds ago       Running             etcd                      1                   0f2355c092ce2       etcd-kubernetes-upgrade-345460
	5511ddce4f6a0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   12 seconds ago       Running             kube-controller-manager   1                   921e757ac71d6       kube-controller-manager-kubernetes-upgrade-345460
	409a2ff29fe00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   52 seconds ago       Exited              storage-provisioner       1                   d953f428ec826       storage-provisioner
	204485c6c0d93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   54b06146567eb       coredns-7db6d8ff4d-2rg9m
	b7409ae78cdfa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   a0a804ccc8190       coredns-7db6d8ff4d-hzndz
	770a82a274d45       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   About a minute ago   Exited              kube-proxy                0                   6a29c5754de82       kube-proxy-ww5tg
	e4aa6d261137c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   About a minute ago   Exited              kube-scheduler            0                   446211727cffa       kube-scheduler-kubernetes-upgrade-345460
	95607bfe05b9a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      0                   214ca4a05320b       etcd-kubernetes-upgrade-345460
	214db0345eb9e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   About a minute ago   Exited              kube-controller-manager   0                   1cce051802010       kube-controller-manager-kubernetes-upgrade-345460
	6cef60b4f893b       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   About a minute ago   Exited              kube-apiserver            0                   ace44ed799a06       kube-apiserver-kubernetes-upgrade-345460
	
	
	==> coredns [204485c6c0d93bae9e93b18bf168eb4ccfbf0fb532f1cc1ec8ddded0ea5f164d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[836657761]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:12:42.144) (total time: 30002ms):
	Trace[836657761]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:13:12.146)
	Trace[836657761]: [30.002512552s] [30.002512552s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[505790993]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:12:42.144) (total time: 30002ms):
	Trace[505790993]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:13:12.146)
	Trace[505790993]: [30.002606751s] [30.002606751s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[266678360]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:12:42.144) (total time: 30003ms):
	Trace[266678360]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:13:12.146)
	Trace[266678360]: [30.003402937s] [30.003402937s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [29963ff812db0a5fb85e1593357483a2dfac1d567b60f587a7cc77a8f3452fce] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [736a2d1c817fca5294bf89b8cdf2765bd79d1b192bf00bc1452bf1525857b298] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b7409ae78cdfad4e4dcd3e46d0fe33d68639acecf5e62a6503f4952f7c25c678] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1192861329]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:12:42.140) (total time: 30002ms):
	Trace[1192861329]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:13:12.141)
	Trace[1192861329]: [30.002453353s] [30.002453353s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1081643440]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:12:42.141) (total time: 30001ms):
	Trace[1081643440]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:13:12.143)
	Trace[1081643440]: [30.001974511s] [30.001974511s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[573907783]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:12:42.141) (total time: 30002ms):
	Trace[573907783]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:13:12.143)
	Trace[573907783]: [30.002309792s] [30.002309792s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-345460
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-345460
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:12:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-345460
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:14:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:14:00 +0000   Sat, 20 Apr 2024 01:12:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:14:00 +0000   Sat, 20 Apr 2024 01:12:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:14:00 +0000   Sat, 20 Apr 2024 01:12:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:14:00 +0000   Sat, 20 Apr 2024 01:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.68
	  Hostname:    kubernetes-upgrade-345460
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b77f831f1a8d422e81be953f5f43184a
	  System UUID:                b77f831f-1a8d-422e-81be-953f5f43184a
	  Boot ID:                    9a136ee7-203c-47cb-a296-da0c016cd369
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2rg9m                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 coredns-7db6d8ff4d-hzndz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-kubernetes-upgrade-345460                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         91s
	  kube-system                 kube-apiserver-kubernetes-upgrade-345460             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-345460    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-ww5tg                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-kubernetes-upgrade-345460             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 83s                  kube-proxy       
	  Normal  Starting                 3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node kubernetes-upgrade-345460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node kubernetes-upgrade-345460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)  kubelet          Node kubernetes-upgrade-345460 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                  node-controller  Node kubernetes-upgrade-345460 event: Registered Node kubernetes-upgrade-345460 in Controller
	  Normal  Starting                 10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)    kubelet          Node kubernetes-upgrade-345460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)    kubelet          Node kubernetes-upgrade-345460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)    kubelet          Node kubernetes-upgrade-345460 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                  kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.891277] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.066449] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069578] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.238808] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.136879] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.340760] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +5.470331] systemd-fstab-generator[729]: Ignoring "noauto" option for root device
	[  +0.080569] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.924001] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[  +6.769440] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.086263] kauditd_printk_skb: 97 callbacks suppressed
	[ +13.629180] kauditd_printk_skb: 21 callbacks suppressed
	[Apr20 01:13] kauditd_printk_skb: 76 callbacks suppressed
	[ +34.081555] systemd-fstab-generator[2229]: Ignoring "noauto" option for root device
	[  +0.190434] systemd-fstab-generator[2241]: Ignoring "noauto" option for root device
	[  +0.217412] systemd-fstab-generator[2256]: Ignoring "noauto" option for root device
	[  +0.195353] systemd-fstab-generator[2268]: Ignoring "noauto" option for root device
	[  +0.364831] systemd-fstab-generator[2296]: Ignoring "noauto" option for root device
	[  +4.241424] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	[  +0.098424] kauditd_printk_skb: 123 callbacks suppressed
	[  +3.409819] systemd-fstab-generator[3068]: Ignoring "noauto" option for root device
	[Apr20 01:14] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.614848] systemd-fstab-generator[3591]: Ignoring "noauto" option for root device
	
	
	==> etcd [39279919f9447651b6e3bd391fded1960ba884729dac51f26166a1d1d7c875de] <==
	{"level":"info","ts":"2024-04-20T01:13:56.48805Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8d66edb0005f48ce","local-member-id":"6d2f42b3161e924d","added-peer-id":"6d2f42b3161e924d","added-peer-peer-urls":["https://192.168.50.68:2380"]}
	{"level":"info","ts":"2024-04-20T01:13:56.488187Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8d66edb0005f48ce","local-member-id":"6d2f42b3161e924d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:13:56.488221Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:13:56.495252Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:13:56.495373Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:13:56.49542Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:13:56.503728Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T01:13:56.504078Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6d2f42b3161e924d","initial-advertise-peer-urls":["https://192.168.50.68:2380"],"listen-peer-urls":["https://192.168.50.68:2380"],"advertise-client-urls":["https://192.168.50.68:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.68:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:13:56.504674Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T01:13:56.504968Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.68:2380"}
	{"level":"info","ts":"2024-04-20T01:13:56.505023Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.68:2380"}
	{"level":"info","ts":"2024-04-20T01:13:58.097675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d2f42b3161e924d is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-20T01:13:58.097825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d2f42b3161e924d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-20T01:13:58.097875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d2f42b3161e924d received MsgPreVoteResp from 6d2f42b3161e924d at term 2"}
	{"level":"info","ts":"2024-04-20T01:13:58.097916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d2f42b3161e924d became candidate at term 3"}
	{"level":"info","ts":"2024-04-20T01:13:58.09795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d2f42b3161e924d received MsgVoteResp from 6d2f42b3161e924d at term 3"}
	{"level":"info","ts":"2024-04-20T01:13:58.097988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d2f42b3161e924d became leader at term 3"}
	{"level":"info","ts":"2024-04-20T01:13:58.098025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6d2f42b3161e924d elected leader 6d2f42b3161e924d at term 3"}
	{"level":"info","ts":"2024-04-20T01:13:58.10093Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6d2f42b3161e924d","local-member-attributes":"{Name:kubernetes-upgrade-345460 ClientURLs:[https://192.168.50.68:2379]}","request-path":"/0/members/6d2f42b3161e924d/attributes","cluster-id":"8d66edb0005f48ce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:13:58.101667Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:13:58.101834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:13:58.113155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:13:58.115788Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:13:58.123924Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.68:2379"}
	{"level":"info","ts":"2024-04-20T01:13:58.145281Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [95607bfe05b9aec4a2a95dae72ba81d9ddba3a633a67aa4353eb8594365795eb] <==
	{"level":"info","ts":"2024-04-20T01:12:23.192839Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:12:23.192963Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:12:23.198657Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:12:23.202881Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:12:23.203262Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8d66edb0005f48ce","local-member-id":"6d2f42b3161e924d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:12:23.222077Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:12:23.228504Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:12:23.228203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.68:2379"}
	{"level":"info","ts":"2024-04-20T01:12:23.231151Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-04-20T01:13:16.88403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.053803ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10542239503106310646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:405 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:529 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-20T01:13:16.88493Z","caller":"traceutil/trace.go:171","msg":"trace[1409763997] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:423; }","duration":"153.556149ms","start":"2024-04-20T01:13:16.731344Z","end":"2024-04-20T01:13:16.8849Z","steps":["trace[1409763997] 'read index received'  (duration: 1.69305ms)","trace[1409763997] 'applied index is now lower than readState.Index'  (duration: 151.861193ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T01:13:16.885064Z","caller":"traceutil/trace.go:171","msg":"trace[817056501] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"211.637343ms","start":"2024-04-20T01:13:16.673409Z","end":"2024-04-20T01:13:16.885047Z","steps":["trace[817056501] 'process raft request'  (duration: 59.700619ms)","trace[817056501] 'compare'  (duration: 149.716358ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T01:13:16.885185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.75028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.68\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-04-20T01:13:16.885245Z","caller":"traceutil/trace.go:171","msg":"trace[1219606176] range","detail":"{range_begin:/registry/masterleases/192.168.50.68; range_end:; response_count:1; response_revision:408; }","duration":"153.913145ms","start":"2024-04-20T01:13:16.731321Z","end":"2024-04-20T01:13:16.885234Z","steps":["trace[1219606176] 'agreement among raft nodes before linearized reading'  (duration: 153.733888ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T01:13:19.078698Z","caller":"traceutil/trace.go:171","msg":"trace[271729485] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"184.847216ms","start":"2024-04-20T01:13:18.893833Z","end":"2024-04-20T01:13:19.07868Z","steps":["trace[271729485] 'process raft request'  (duration: 184.306455ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T01:13:39.309731Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-20T01:13:39.309816Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-345460","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.68:2380"],"advertise-client-urls":["https://192.168.50.68:2379"]}
	{"level":"warn","ts":"2024-04-20T01:13:39.309931Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-20T01:13:39.310105Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-20T01:13:39.380041Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.68:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-20T01:13:39.380182Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.68:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-20T01:13:39.380277Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6d2f42b3161e924d","current-leader-member-id":"6d2f42b3161e924d"}
	{"level":"info","ts":"2024-04-20T01:13:39.384014Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.68:2380"}
	{"level":"info","ts":"2024-04-20T01:13:39.384365Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.68:2380"}
	{"level":"info","ts":"2024-04-20T01:13:39.384494Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-345460","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.68:2380"],"advertise-client-urls":["https://192.168.50.68:2379"]}
	
	
	==> kernel <==
	 01:14:05 up 2 min,  0 users,  load average: 0.87, 0.28, 0.10
	Linux kubernetes-upgrade-345460 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [435308e05c9e1f27129aee9dd97ae9d0ee0cc0c39d9b556713c08b910b2d6640] <==
	I0420 01:14:00.233140       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0420 01:14:00.335818       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 01:14:00.347478       1 aggregator.go:165] initial CRD sync complete...
	I0420 01:14:00.347526       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 01:14:00.347534       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 01:14:00.367895       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 01:14:00.368040       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 01:14:00.384908       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 01:14:00.384963       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 01:14:00.384984       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 01:14:00.422932       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 01:14:00.423656       1 policy_source.go:224] refreshing policies
	I0420 01:14:00.425997       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 01:14:00.426681       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 01:14:00.428826       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 01:14:00.432088       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0420 01:14:00.447666       1 cache.go:39] Caches are synced for autoregister controller
	E0420 01:14:00.480870       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0420 01:14:01.211773       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0420 01:14:02.426167       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0420 01:14:02.451924       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 01:14:02.502403       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 01:14:02.602217       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0420 01:14:02.609116       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0420 01:14:03.597386       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [6cef60b4f893b33e2557fd61a691eb6ce859313116f5596c7062ad3fce2405c0] <==
	I0420 01:13:39.326624       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0420 01:13:39.326638       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0420 01:13:39.326665       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0420 01:13:39.326683       1 naming_controller.go:302] Shutting down NamingConditionController
	I0420 01:13:39.326697       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0420 01:13:39.326712       1 controller.go:167] Shutting down OpenAPI controller
	I0420 01:13:39.326726       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0420 01:13:39.326746       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0420 01:13:39.326797       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0420 01:13:39.326811       1 establishing_controller.go:87] Shutting down EstablishingController
	I0420 01:13:39.326823       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0420 01:13:39.326833       1 available_controller.go:439] Shutting down AvailableConditionController
	I0420 01:13:39.326852       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0420 01:13:39.326897       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0420 01:13:39.326914       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0420 01:13:39.334755       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0420 01:13:39.337428       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0420 01:13:39.338011       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0420 01:13:39.338106       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0420 01:13:39.338345       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0420 01:13:39.338377       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0420 01:13:39.338411       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0420 01:13:39.338477       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0420 01:13:39.338506       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0420 01:13:39.338896       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [214db0345eb9ea03dc56feb8473a5488a19b93c3866ec17447516b9b37a93491] <==
	I0420 01:12:40.881654       1 shared_informer.go:320] Caches are synced for TTL
	I0420 01:12:40.895307       1 shared_informer.go:320] Caches are synced for crt configmap
	I0420 01:12:40.897937       1 shared_informer.go:320] Caches are synced for attach detach
	I0420 01:12:40.908257       1 shared_informer.go:320] Caches are synced for node
	I0420 01:12:40.908319       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0420 01:12:40.908336       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0420 01:12:40.908341       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0420 01:12:40.908346       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0420 01:12:40.924011       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-345460" podCIDRs=["10.244.0.0/24"]
	I0420 01:12:40.946773       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.362743ms"
	I0420 01:12:40.949533       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0420 01:12:40.951278       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 01:12:40.969248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.415765ms"
	I0420 01:12:40.975966       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 01:12:41.022115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.238726ms"
	I0420 01:12:41.022818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="525.368µs"
	I0420 01:12:41.391211       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 01:12:41.394502       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 01:12:41.394787       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0420 01:12:42.216800       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.465674ms"
	I0420 01:12:42.256742       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.461µs"
	I0420 01:13:21.254252       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.17044ms"
	I0420 01:13:21.256277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.29µs"
	I0420 01:13:21.312340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.415229ms"
	I0420 01:13:21.314233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="267.394µs"
	
	
	==> kube-controller-manager [5511ddce4f6a01c3ed30a144a4ba8fbe48b554bff014947c9e7eab807709cbe8] <==
	I0420 01:14:02.421730       1 shared_informer.go:320] Caches are synced for tokens
	I0420 01:14:02.426404       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0420 01:14:02.426638       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0420 01:14:02.426687       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0420 01:14:02.426694       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0420 01:14:02.432629       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0420 01:14:02.433341       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0420 01:14:02.433382       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0420 01:14:02.444937       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0420 01:14:02.445205       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0420 01:14:02.445252       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0420 01:14:02.445279       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0420 01:14:02.453216       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0420 01:14:02.453654       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0420 01:14:02.455000       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0420 01:14:02.461418       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0420 01:14:02.462273       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0420 01:14:02.462291       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0420 01:14:02.466226       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0420 01:14:02.466404       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0420 01:14:02.466429       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0420 01:14:02.466457       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0420 01:14:02.470341       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0420 01:14:02.470444       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0420 01:14:02.472497       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	
	
	==> kube-proxy [52e233b30156c0ab0612b4d383239b775e406f0532f5f673d9ab5ae66fdf88ab] <==
	I0420 01:14:01.656670       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:14:01.676182       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.68"]
	I0420 01:14:01.748478       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:14:01.748521       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:14:01.748628       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:14:01.751476       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:14:01.751883       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:14:01.752045       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:14:01.753205       1 config.go:192] "Starting service config controller"
	I0420 01:14:01.753259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:14:01.753302       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:14:01.753318       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:14:01.753897       1 config.go:319] "Starting node config controller"
	I0420 01:14:01.753980       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:14:01.853472       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:14:01.853654       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:14:01.854107       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [770a82a274d45a95b6718a8ea9771145cdfc7e220ba1948f20b2ad1289965045] <==
	I0420 01:12:41.767856       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:12:41.780869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.68"]
	I0420 01:12:41.939723       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:12:41.939804       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:12:41.939829       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:12:41.963767       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:12:41.964005       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:12:41.964017       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:12:41.966006       1 config.go:192] "Starting service config controller"
	I0420 01:12:41.966019       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:12:41.966045       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:12:41.966050       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:12:41.970051       1 config.go:319] "Starting node config controller"
	I0420 01:12:41.970061       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:12:42.067258       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:12:42.067300       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:12:42.070691       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [669e1873a37a1505fc7ae863748c84fa977a677c9c723612a3ff173e8b21a63c] <==
	I0420 01:13:58.520844       1 serving.go:380] Generated self-signed cert in-memory
	W0420 01:14:00.253758       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0420 01:14:00.255635       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:14:00.255732       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0420 01:14:00.255764       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0420 01:14:00.376314       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0420 01:14:00.376377       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:14:00.382710       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0420 01:14:00.382929       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0420 01:14:00.382981       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 01:14:00.383005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0420 01:14:00.485458       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e4aa6d261137c7d171dac541ec455873854b64b204eecf3dde091ea7c567849a] <==
	E0420 01:12:25.774091       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 01:12:25.774921       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:12:25.774983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 01:12:25.790836       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 01:12:25.790902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 01:12:25.981651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 01:12:25.981848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 01:12:25.991707       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:12:25.991905       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:12:25.997669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 01:12:25.997718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 01:12:26.074254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 01:12:26.074342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 01:12:26.139415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 01:12:26.139482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 01:12:26.146854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 01:12:26.146922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 01:12:26.252086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 01:12:26.252111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 01:12:26.263810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 01:12:26.263862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0420 01:12:28.801888       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 01:13:39.308536       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0420 01:13:39.308814       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0420 01:13:39.309119       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 20 01:13:55 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:13:55.875814    3075 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/08d520bb0f1a9c6e38541787795aa99a-etcd-certs\") pod \"etcd-kubernetes-upgrade-345460\" (UID: \"08d520bb0f1a9c6e38541787795aa99a\") " pod="kube-system/etcd-kubernetes-upgrade-345460"
	Apr 20 01:13:55 kubernetes-upgrade-345460 kubelet[3075]: E0420 01:13:55.878802    3075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-345460?timeout=10s\": dial tcp 192.168.50.68:8443: connect: connection refused" interval="400ms"
	Apr 20 01:13:55 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:13:55.932702    3075 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-345460"
	Apr 20 01:13:55 kubernetes-upgrade-345460 kubelet[3075]: E0420 01:13:55.934342    3075 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.68:8443: connect: connection refused" node="kubernetes-upgrade-345460"
	Apr 20 01:13:56 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:13:56.054709    3075 scope.go:117] "RemoveContainer" containerID="e4aa6d261137c7d171dac541ec455873854b64b204eecf3dde091ea7c567849a"
	Apr 20 01:13:56 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:13:56.067842    3075 scope.go:117] "RemoveContainer" containerID="214db0345eb9ea03dc56feb8473a5488a19b93c3866ec17447516b9b37a93491"
	Apr 20 01:13:56 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:13:56.093098    3075 scope.go:117] "RemoveContainer" containerID="95607bfe05b9aec4a2a95dae72ba81d9ddba3a633a67aa4353eb8594365795eb"
	Apr 20 01:13:56 kubernetes-upgrade-345460 kubelet[3075]: E0420 01:13:56.280081    3075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-345460?timeout=10s\": dial tcp 192.168.50.68:8443: connect: connection refused" interval="800ms"
	Apr 20 01:13:56 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:13:56.337696    3075 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-345460"
	Apr 20 01:13:56 kubernetes-upgrade-345460 kubelet[3075]: E0420 01:13:56.339137    3075 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.68:8443: connect: connection refused" node="kubernetes-upgrade-345460"
	Apr 20 01:13:57 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:13:57.141679    3075 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-345460"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.470175    3075 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-345460"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.470868    3075 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-345460"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.472735    3075 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.474278    3075 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.607688    3075 apiserver.go:52] "Watching apiserver"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.610649    3075 topology_manager.go:215] "Topology Admit Handler" podUID="6578494d-6974-4d36-be38-e29cd4360d54" podNamespace="kube-system" podName="storage-provisioner"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.611628    3075 topology_manager.go:215] "Topology Admit Handler" podUID="d5469f00-e1bf-442d-a31e-c567dd8057df" podNamespace="kube-system" podName="kube-proxy-ww5tg"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.613040    3075 topology_manager.go:215] "Topology Admit Handler" podUID="8b4b1d84-0d3a-46fa-a156-bcd858000abf" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2rg9m"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.613426    3075 topology_manager.go:215] "Topology Admit Handler" podUID="d9126f8a-d858-468a-96b4-347ae95b8ea6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hzndz"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.622762    3075 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.659022    3075 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5469f00-e1bf-442d-a31e-c567dd8057df-lib-modules\") pod \"kube-proxy-ww5tg\" (UID: \"d5469f00-e1bf-442d-a31e-c567dd8057df\") " pod="kube-system/kube-proxy-ww5tg"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.659135    3075 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6578494d-6974-4d36-be38-e29cd4360d54-tmp\") pod \"storage-provisioner\" (UID: \"6578494d-6974-4d36-be38-e29cd4360d54\") " pod="kube-system/storage-provisioner"
	Apr 20 01:14:00 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:00.659186    3075 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5469f00-e1bf-442d-a31e-c567dd8057df-xtables-lock\") pod \"kube-proxy-ww5tg\" (UID: \"d5469f00-e1bf-442d-a31e-c567dd8057df\") " pod="kube-system/kube-proxy-ww5tg"
	Apr 20 01:14:02 kubernetes-upgrade-345460 kubelet[3075]: I0420 01:14:02.912222    3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [409a2ff29fe002495cb6ff266a8f59fed390a8916b6dd3d24e49bf3f4eacc1e8] <==
	I0420 01:13:12.575070       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 01:13:12.592912       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 01:13:12.593033       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 01:13:12.627526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 01:13:12.628221       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"203adef3-012a-47bb-b8d5-01ca7ccc62ca", APIVersion:"v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-345460_5d0b199a-f0f2-495b-8175-c7d6eef5ac8f became leader
	I0420 01:13:12.630609       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-345460_5d0b199a-f0f2-495b-8175-c7d6eef5ac8f!
	I0420 01:13:12.731050       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-345460_5d0b199a-f0f2-495b-8175-c7d6eef5ac8f!
	
	
	==> storage-provisioner [cfa50eaf2f9e436e759b1652b6514e374b7463523e36c8974d7aa6bc6010ee6b] <==
	I0420 01:14:01.534376       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 01:14:01.570944       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 01:14:01.574479       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:14:04.292660  131102 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18703-76456/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-345460 -n kubernetes-upgrade-345460
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-345460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-345460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-345460
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-345460: (1.190248276s)
--- FAIL: TestKubernetesUpgrade (443.94s)
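
The "failed to read file .../lastStart.txt: bufio.Scanner: token too long" error in the stderr block above comes from Go's bufio.Scanner, which by default refuses any single line longer than 64 KiB; lastStart.txt evidently contains longer lines, so the post-mortem step cannot tail the previous start log. The short standalone sketch below (illustrative only, not minikube's actual code) reproduces that error and shows the usual remedy of giving the scanner a larger buffer.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// A single "line" longer than bufio.Scanner's default 64 KiB limit,
		// similar to the very long lines written to lastStart.txt.
		longLine := strings.Repeat("x", bufio.MaxScanTokenSize+1)

		// Default scanner: Scan() stops and Err() reports "bufio.Scanner: token too long".
		s := bufio.NewScanner(strings.NewReader(longLine))
		for s.Scan() {
		}
		fmt.Fprintln(os.Stderr, "default scanner:", s.Err())

		// Scanner with an enlarged buffer: the same line is read without error.
		s = bufio.NewScanner(strings.NewReader(longLine))
		s.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024) // allow lines up to 10 MiB
		for s.Scan() {
		}
		fmt.Fprintln(os.Stderr, "enlarged buffer:", s.Err())
	}
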

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (56.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-680144 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-680144 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.261732666s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-680144] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-680144" primary control-plane node in "pause-680144" cluster
	* Updating the running kvm2 "pause-680144" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-680144" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:11:00.890617  126402 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:11:00.890776  126402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:11:00.890787  126402 out.go:304] Setting ErrFile to fd 2...
	I0420 01:11:00.890793  126402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:11:00.890982  126402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:11:00.891541  126402 out.go:298] Setting JSON to false
	I0420 01:11:00.892517  126402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14008,"bootTime":1713561453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:11:00.892579  126402 start.go:139] virtualization: kvm guest
	I0420 01:11:00.894992  126402 out.go:177] * [pause-680144] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:11:00.896252  126402 notify.go:220] Checking for updates...
	I0420 01:11:00.897488  126402 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:11:00.899008  126402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:11:00.900486  126402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:11:00.901899  126402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:11:00.903085  126402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:11:00.904246  126402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:11:00.906016  126402 config.go:182] Loaded profile config "pause-680144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:11:00.906539  126402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:11:00.906589  126402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:11:00.922165  126402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I0420 01:11:00.922725  126402 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:11:00.923320  126402 main.go:141] libmachine: Using API Version  1
	I0420 01:11:00.923347  126402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:11:00.923718  126402 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:11:00.923937  126402 main.go:141] libmachine: (pause-680144) Calling .DriverName
	I0420 01:11:00.924193  126402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:11:00.924558  126402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:11:00.924612  126402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:11:00.939436  126402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0420 01:11:00.939864  126402 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:11:00.940328  126402 main.go:141] libmachine: Using API Version  1
	I0420 01:11:00.940350  126402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:11:00.940658  126402 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:11:00.940875  126402 main.go:141] libmachine: (pause-680144) Calling .DriverName
	I0420 01:11:00.979213  126402 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:11:00.980449  126402 start.go:297] selected driver: kvm2
	I0420 01:11:00.980466  126402 start.go:901] validating driver "kvm2" against &{Name:pause-680144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-680144 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false p
ortainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:11:00.980650  126402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:11:00.981114  126402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:11:00.981197  126402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:11:00.996192  126402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:11:00.997638  126402 cni.go:84] Creating CNI manager for ""
	I0420 01:11:00.997686  126402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:11:00.997878  126402 start.go:340] cluster config:
	{Name:pause-680144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-680144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false stor
age-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:11:00.998142  126402 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:11:00.999879  126402 out.go:177] * Starting "pause-680144" primary control-plane node in "pause-680144" cluster
	I0420 01:11:01.001086  126402 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:11:01.001128  126402 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:11:01.001142  126402 cache.go:56] Caching tarball of preloaded images
	I0420 01:11:01.001254  126402 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:11:01.001270  126402 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:11:01.001432  126402 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/pause-680144/config.json ...
	I0420 01:11:01.001621  126402 start.go:360] acquireMachinesLock for pause-680144: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:11:01.001661  126402 start.go:364] duration metric: took 22.798µs to acquireMachinesLock for "pause-680144"
	I0420 01:11:01.001675  126402 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:11:01.001682  126402 fix.go:54] fixHost starting: 
	I0420 01:11:01.001940  126402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:11:01.001963  126402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:11:01.018482  126402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0420 01:11:01.018967  126402 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:11:01.019532  126402 main.go:141] libmachine: Using API Version  1
	I0420 01:11:01.019552  126402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:11:01.019889  126402 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:11:01.020093  126402 main.go:141] libmachine: (pause-680144) Calling .DriverName
	I0420 01:11:01.020257  126402 main.go:141] libmachine: (pause-680144) Calling .GetState
	I0420 01:11:01.022167  126402 fix.go:112] recreateIfNeeded on pause-680144: state=Running err=<nil>
	W0420 01:11:01.022202  126402 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:11:01.024095  126402 out.go:177] * Updating the running kvm2 "pause-680144" VM ...
	I0420 01:11:01.025374  126402 machine.go:94] provisionDockerMachine start ...
	I0420 01:11:01.025399  126402 main.go:141] libmachine: (pause-680144) Calling .DriverName
	I0420 01:11:01.025580  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:01.027975  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.028497  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:01.028523  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.028678  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHPort
	I0420 01:11:01.028879  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:01.029094  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:01.029272  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHUsername
	I0420 01:11:01.029513  126402 main.go:141] libmachine: Using SSH client type: native
	I0420 01:11:01.029790  126402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I0420 01:11:01.029807  126402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:11:01.146619  126402 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-680144
	
	I0420 01:11:01.146646  126402 main.go:141] libmachine: (pause-680144) Calling .GetMachineName
	I0420 01:11:01.146914  126402 buildroot.go:166] provisioning hostname "pause-680144"
	I0420 01:11:01.146940  126402 main.go:141] libmachine: (pause-680144) Calling .GetMachineName
	I0420 01:11:01.147121  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:01.150178  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.150606  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:01.150635  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.150849  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHPort
	I0420 01:11:01.151043  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:01.151220  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:01.151368  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHUsername
	I0420 01:11:01.151520  126402 main.go:141] libmachine: Using SSH client type: native
	I0420 01:11:01.151674  126402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I0420 01:11:01.151687  126402 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-680144 && echo "pause-680144" | sudo tee /etc/hostname
	I0420 01:11:01.292974  126402 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-680144
	
	I0420 01:11:01.293007  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:01.296421  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.296899  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:01.296932  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.297208  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHPort
	I0420 01:11:01.297456  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:01.297632  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:01.297910  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHUsername
	I0420 01:11:01.298135  126402 main.go:141] libmachine: Using SSH client type: native
	I0420 01:11:01.298321  126402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I0420 01:11:01.298339  126402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-680144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-680144/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-680144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:11:01.415864  126402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:11:01.415902  126402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:11:01.415948  126402 buildroot.go:174] setting up certificates
	I0420 01:11:01.415966  126402 provision.go:84] configureAuth start
	I0420 01:11:01.415984  126402 main.go:141] libmachine: (pause-680144) Calling .GetMachineName
	I0420 01:11:01.416305  126402 main.go:141] libmachine: (pause-680144) Calling .GetIP
	I0420 01:11:01.419205  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.419563  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:01.419603  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.419822  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:01.422246  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.422588  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:01.422627  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.422799  126402 provision.go:143] copyHostCerts
	I0420 01:11:01.422880  126402 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:11:01.422893  126402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:11:01.422965  126402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:11:01.423088  126402 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:11:01.423098  126402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:11:01.423129  126402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:11:01.423208  126402 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:11:01.423220  126402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:11:01.423246  126402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:11:01.423325  126402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.pause-680144 san=[127.0.0.1 192.168.72.180 localhost minikube pause-680144]
	I0420 01:11:01.654812  126402 provision.go:177] copyRemoteCerts
	I0420 01:11:01.654870  126402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:11:01.654894  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:01.657869  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.658294  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:01.658329  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.658508  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHPort
	I0420 01:11:01.658702  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:01.658885  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHUsername
	I0420 01:11:01.658996  126402 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/pause-680144/id_rsa Username:docker}
	I0420 01:11:01.750066  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:11:01.780961  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0420 01:11:01.810438  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:11:01.841523  126402 provision.go:87] duration metric: took 425.540998ms to configureAuth
	I0420 01:11:01.841551  126402 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:11:01.841759  126402 config.go:182] Loaded profile config "pause-680144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:11:01.841882  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:01.844996  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.845379  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:01.845409  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:01.845592  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHPort
	I0420 01:11:01.845831  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:01.846013  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:01.846203  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHUsername
	I0420 01:11:01.846396  126402 main.go:141] libmachine: Using SSH client type: native
	I0420 01:11:01.846629  126402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I0420 01:11:01.846655  126402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:11:07.471491  126402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:11:07.471524  126402 machine.go:97] duration metric: took 6.446130346s to provisionDockerMachine
	I0420 01:11:07.471538  126402 start.go:293] postStartSetup for "pause-680144" (driver="kvm2")
	I0420 01:11:07.471552  126402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:11:07.471572  126402 main.go:141] libmachine: (pause-680144) Calling .DriverName
	I0420 01:11:07.471957  126402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:11:07.472000  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:07.474954  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.475335  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:07.475365  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.475583  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHPort
	I0420 01:11:07.475854  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:07.476083  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHUsername
	I0420 01:11:07.476256  126402 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/pause-680144/id_rsa Username:docker}
	I0420 01:11:07.565169  126402 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:11:07.570107  126402 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:11:07.570137  126402 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:11:07.570215  126402 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:11:07.570311  126402 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:11:07.570407  126402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:11:07.581101  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:11:07.607295  126402 start.go:296] duration metric: took 135.740096ms for postStartSetup
	I0420 01:11:07.607339  126402 fix.go:56] duration metric: took 6.605655948s for fixHost
	I0420 01:11:07.607367  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:07.610401  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.610735  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:07.610772  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.610914  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHPort
	I0420 01:11:07.611146  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:07.611290  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:07.611418  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHUsername
	I0420 01:11:07.611595  126402 main.go:141] libmachine: Using SSH client type: native
	I0420 01:11:07.611803  126402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I0420 01:11:07.611819  126402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0420 01:11:07.734258  126402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713575467.716435321
	
	I0420 01:11:07.734282  126402 fix.go:216] guest clock: 1713575467.716435321
	I0420 01:11:07.734291  126402 fix.go:229] Guest: 2024-04-20 01:11:07.716435321 +0000 UTC Remote: 2024-04-20 01:11:07.607345478 +0000 UTC m=+6.768798305 (delta=109.089843ms)
	I0420 01:11:07.734316  126402 fix.go:200] guest clock delta is within tolerance: 109.089843ms
	I0420 01:11:07.734323  126402 start.go:83] releasing machines lock for "pause-680144", held for 6.732652028s
	I0420 01:11:07.734344  126402 main.go:141] libmachine: (pause-680144) Calling .DriverName
	I0420 01:11:07.734638  126402 main.go:141] libmachine: (pause-680144) Calling .GetIP
	I0420 01:11:07.737396  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.737905  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:07.737936  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.738093  126402 main.go:141] libmachine: (pause-680144) Calling .DriverName
	I0420 01:11:07.738834  126402 main.go:141] libmachine: (pause-680144) Calling .DriverName
	I0420 01:11:07.739016  126402 main.go:141] libmachine: (pause-680144) Calling .DriverName
	I0420 01:11:07.739105  126402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:11:07.739145  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:07.739273  126402 ssh_runner.go:195] Run: cat /version.json
	I0420 01:11:07.739302  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHHostname
	I0420 01:11:07.741819  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.742181  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.742209  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:07.742225  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.742375  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHPort
	I0420 01:11:07.742529  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:07.742623  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:07.742645  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:07.742668  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHUsername
	I0420 01:11:07.742832  126402 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/pause-680144/id_rsa Username:docker}
	I0420 01:11:07.742857  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHPort
	I0420 01:11:07.743034  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHKeyPath
	I0420 01:11:07.743195  126402 main.go:141] libmachine: (pause-680144) Calling .GetSSHUsername
	I0420 01:11:07.743362  126402 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/pause-680144/id_rsa Username:docker}
	I0420 01:11:07.844751  126402 ssh_runner.go:195] Run: systemctl --version
	I0420 01:11:07.852522  126402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:11:08.017107  126402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:11:08.024716  126402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:11:08.024798  126402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:11:08.037527  126402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0420 01:11:08.037567  126402 start.go:494] detecting cgroup driver to use...
	I0420 01:11:08.037635  126402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:11:08.060294  126402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:11:08.077481  126402 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:11:08.077548  126402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:11:08.096269  126402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:11:08.115237  126402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:11:08.252716  126402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:11:08.402649  126402 docker.go:233] disabling docker service ...
	I0420 01:11:08.402740  126402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:11:08.424230  126402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:11:08.444244  126402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:11:08.595494  126402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:11:08.752394  126402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:11:08.770313  126402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:11:08.791628  126402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:11:08.791697  126402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:11:08.807340  126402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:11:08.807401  126402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:11:08.820528  126402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:11:08.834770  126402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:11:08.848849  126402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:11:08.863052  126402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:11:08.876157  126402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:11:08.888655  126402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:11:08.903170  126402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:11:08.915501  126402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:11:08.928907  126402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:11:09.079506  126402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:11:16.087669  126402 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.008116154s)
	I0420 01:11:16.087701  126402 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:11:16.087751  126402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:11:16.094502  126402 start.go:562] Will wait 60s for crictl version
	I0420 01:11:16.094562  126402 ssh_runner.go:195] Run: which crictl
	I0420 01:11:16.099531  126402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:11:16.143354  126402 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:11:16.143458  126402 ssh_runner.go:195] Run: crio --version
	I0420 01:11:16.175204  126402 ssh_runner.go:195] Run: crio --version
	I0420 01:11:16.208437  126402 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:11:16.209935  126402 main.go:141] libmachine: (pause-680144) Calling .GetIP
	I0420 01:11:16.212607  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:16.213001  126402 main.go:141] libmachine: (pause-680144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:7c:8b", ip: ""} in network mk-pause-680144: {Iface:virbr2 ExpiryTime:2024-04-20 02:09:37 +0000 UTC Type:0 Mac:52:54:00:65:7c:8b Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:pause-680144 Clientid:01:52:54:00:65:7c:8b}
	I0420 01:11:16.213032  126402 main.go:141] libmachine: (pause-680144) DBG | domain pause-680144 has defined IP address 192.168.72.180 and MAC address 52:54:00:65:7c:8b in network mk-pause-680144
	I0420 01:11:16.213202  126402 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0420 01:11:16.218780  126402 kubeadm.go:877] updating cluster {Name:pause-680144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-680144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:11:16.218912  126402 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:11:16.218968  126402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:11:16.264263  126402 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:11:16.264289  126402 crio.go:433] Images already preloaded, skipping extraction
	I0420 01:11:16.264350  126402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:11:16.304536  126402 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:11:16.304568  126402 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:11:16.304580  126402 kubeadm.go:928] updating node { 192.168.72.180 8443 v1.30.0 crio true true} ...
	I0420 01:11:16.304730  126402 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-680144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-680144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:11:16.304817  126402 ssh_runner.go:195] Run: crio config
	I0420 01:11:16.371217  126402 cni.go:84] Creating CNI manager for ""
	I0420 01:11:16.371241  126402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:11:16.371254  126402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:11:16.371275  126402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-680144 NodeName:pause-680144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:11:16.371417  126402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-680144"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:11:16.371480  126402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:11:16.382742  126402 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:11:16.382799  126402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:11:16.393496  126402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0420 01:11:16.413715  126402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:11:16.433558  126402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0420 01:11:16.452780  126402 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I0420 01:11:16.457624  126402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:11:16.602650  126402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:11:16.622494  126402 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/pause-680144 for IP: 192.168.72.180
	I0420 01:11:16.622515  126402 certs.go:194] generating shared ca certs ...
	I0420 01:11:16.622535  126402 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:11:16.622730  126402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:11:16.622807  126402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:11:16.622816  126402 certs.go:256] generating profile certs ...
	I0420 01:11:16.622912  126402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/pause-680144/client.key
	I0420 01:11:16.622974  126402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/pause-680144/apiserver.key.2cb64d1b
	I0420 01:11:16.623014  126402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/pause-680144/proxy-client.key
	I0420 01:11:16.623142  126402 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:11:16.623169  126402 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:11:16.623178  126402 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:11:16.623200  126402 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:11:16.623220  126402 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:11:16.623242  126402 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:11:16.623278  126402 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:11:16.623966  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:11:16.651843  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:11:16.679441  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:11:16.706604  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:11:16.736804  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/pause-680144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0420 01:11:16.765621  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/pause-680144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 01:11:16.794173  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/pause-680144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:11:16.822340  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/pause-680144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:11:16.931853  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:11:17.104671  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:11:17.264605  126402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:11:17.443175  126402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:11:17.638513  126402 ssh_runner.go:195] Run: openssl version
	I0420 01:11:17.672524  126402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:11:17.747570  126402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:11:17.753284  126402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:11:17.753373  126402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:11:17.763332  126402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:11:17.820441  126402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:11:17.863735  126402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:11:17.883124  126402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:11:17.883196  126402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:11:17.890898  126402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:11:17.944922  126402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:11:17.979964  126402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:11:17.989095  126402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:11:17.989164  126402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:11:17.997001  126402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:11:18.012461  126402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:11:18.021522  126402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:11:18.032011  126402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:11:18.044511  126402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:11:18.051382  126402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:11:18.059234  126402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:11:18.066749  126402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 01:11:18.089322  126402 kubeadm.go:391] StartCluster: {Name:pause-680144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-680144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:11:18.089508  126402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:11:18.089599  126402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:11:18.211324  126402 cri.go:89] found id: "2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980"
	I0420 01:11:18.211348  126402 cri.go:89] found id: "2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b"
	I0420 01:11:18.211354  126402 cri.go:89] found id: "b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af"
	I0420 01:11:18.211358  126402 cri.go:89] found id: "12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c"
	I0420 01:11:18.211362  126402 cri.go:89] found id: "1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de"
	I0420 01:11:18.211366  126402 cri.go:89] found id: "59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176"
	I0420 01:11:18.211369  126402 cri.go:89] found id: "20304d589a8b235fef4dd6e4507af410b8ccdbd7bda90493dea5c96e4ef2b19f"
	I0420 01:11:18.211373  126402 cri.go:89] found id: "551de37cbb8060b76f31520f53850dad037815930ada7b5f64f3ebe39b643432"
	I0420 01:11:18.211377  126402 cri.go:89] found id: "edec16b9847131ef6dcc2ef4e9520312c793ec084a8c3aff05d30db18581ffe2"
	I0420 01:11:18.211384  126402 cri.go:89] found id: "394502d59524e1efbe5949275986c0028f7ef7ace6408d9c0e9a9dc09048004f"
	I0420 01:11:18.211388  126402 cri.go:89] found id: ""
	I0420 01:11:18.211440  126402 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-680144 -n pause-680144
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-680144 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-680144 logs -n 25: (1.780117273s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-692221      | cert-expiration-692221    | jenkins | v1.33.0 | 20 Apr 24 01:08 UTC | 20 Apr 24 01:09 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h        |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-872162 stop    | minikube                  | jenkins | v1.26.0 | 20 Apr 24 01:08 UTC | 20 Apr 24 01:08 UTC |
	| start   | -p stopped-upgrade-872162      | stopped-upgrade-872162    | jenkins | v1.33.0 | 20 Apr 24 01:08 UTC | 20 Apr 24 01:09 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-981367      | running-upgrade-981367    | jenkins | v1.33.0 | 20 Apr 24 01:09 UTC | 20 Apr 24 01:09 UTC |
	| start   | -p pause-680144 --memory=2048  | pause-680144              | jenkins | v1.33.0 | 20 Apr 24 01:09 UTC | 20 Apr 24 01:11 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-692221      | cert-expiration-692221    | jenkins | v1.33.0 | 20 Apr 24 01:09 UTC | 20 Apr 24 01:09 UTC |
	| start   | -p auto-831611 --memory=3072   | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:09 UTC | 20 Apr 24 01:11 UTC |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --wait-timeout=15m             |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-872162      | stopped-upgrade-872162    | jenkins | v1.33.0 | 20 Apr 24 01:09 UTC | 20 Apr 24 01:09 UTC |
	| start   | -p bridge-831611 --memory=3072 | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:09 UTC |                     |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --wait-timeout=15m             |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2     |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-680144                | pause-680144              | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 pgrep -a        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | kubelet                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-345460   | kubernetes-upgrade-345460 | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	| start   | -p kubernetes-upgrade-345460   | kubernetes-upgrade-345460 | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/nsswitch.conf             |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/hosts                     |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/resolv.conf               |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo crictl     | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | pods                           |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo crictl ps  | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | --all                          |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo find       | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/cni -type f -exec sh -c   |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo ip a s     | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	| ssh     | -p auto-831611 sudo ip r s     | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	| ssh     | -p auto-831611 sudo            | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | iptables-save                  |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo iptables   | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | -t nat -L -n -v                |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo systemctl  | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | status kubelet --all --full    |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo journalctl | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC |                     |
	|         | -xeu kubelet --all --full      |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:11:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:11:45.736759  126811 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:11:45.737023  126811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:11:45.737036  126811 out.go:304] Setting ErrFile to fd 2...
	I0420 01:11:45.737043  126811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:11:45.737297  126811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:11:45.737982  126811 out.go:298] Setting JSON to false
	I0420 01:11:45.738873  126811 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14053,"bootTime":1713561453,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:11:45.738935  126811 start.go:139] virtualization: kvm guest
	I0420 01:11:45.741464  126811 out.go:177] * [kubernetes-upgrade-345460] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:11:45.742900  126811 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:11:45.742948  126811 notify.go:220] Checking for updates...
	I0420 01:11:45.744243  126811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:11:45.745616  126811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:11:45.747001  126811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:11:45.748399  126811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:11:45.749687  126811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:11:45.751369  126811 config.go:182] Loaded profile config "kubernetes-upgrade-345460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:11:45.751805  126811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:11:45.751869  126811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:11:45.766642  126811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45239
	I0420 01:11:45.767026  126811 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:11:45.767534  126811 main.go:141] libmachine: Using API Version  1
	I0420 01:11:45.767558  126811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:11:45.767864  126811 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:11:45.768043  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:11:45.768271  126811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:11:45.768593  126811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:11:45.768636  126811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:11:45.782928  126811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
	I0420 01:11:45.783403  126811 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:11:45.783926  126811 main.go:141] libmachine: Using API Version  1
	I0420 01:11:45.783967  126811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:11:45.784266  126811 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:11:45.784495  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:11:45.820623  126811 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:11:45.822310  126811 start.go:297] selected driver: kvm2
	I0420 01:11:45.822329  126811 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:11:45.822459  126811 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:11:45.823398  126811 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:11:45.823479  126811 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:11:45.838764  126811 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:11:45.839203  126811 cni.go:84] Creating CNI manager for ""
	I0420 01:11:45.839227  126811 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:11:45.839280  126811 start.go:340] cluster config:
	{Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:11:45.839379  126811 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:11:45.841340  126811 out.go:177] * Starting "kubernetes-upgrade-345460" primary control-plane node in "kubernetes-upgrade-345460" cluster
	I0420 01:11:45.842603  126811 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:11:45.842640  126811 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:11:45.842651  126811 cache.go:56] Caching tarball of preloaded images
	I0420 01:11:45.842728  126811 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:11:45.842738  126811 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:11:45.842853  126811 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/config.json ...
	I0420 01:11:45.843082  126811 start.go:360] acquireMachinesLock for kubernetes-upgrade-345460: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:11:45.843137  126811 start.go:364] duration metric: took 35.038µs to acquireMachinesLock for "kubernetes-upgrade-345460"
	I0420 01:11:45.843157  126811 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:11:45.843163  126811 fix.go:54] fixHost starting: 
	I0420 01:11:45.843917  126811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:11:45.843966  126811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:11:45.860884  126811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35125
	I0420 01:11:45.861396  126811 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:11:45.861896  126811 main.go:141] libmachine: Using API Version  1
	I0420 01:11:45.861913  126811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:11:45.862269  126811 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:11:45.862456  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:11:45.862616  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetState
	I0420 01:11:45.864219  126811 fix.go:112] recreateIfNeeded on kubernetes-upgrade-345460: state=Stopped err=<nil>
	I0420 01:11:45.864240  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	W0420 01:11:45.864408  126811 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:11:45.866292  126811 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-345460" ...
	I0420 01:11:42.527680  126402 pod_ready.go:102] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:45.022210  126402 pod_ready.go:102] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:45.328178  125761 pod_ready.go:102] pod "coredns-7db6d8ff4d-7jz9v" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:47.329061  125761 pod_ready.go:102] pod "coredns-7db6d8ff4d-7jz9v" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:47.521963  126402 pod_ready.go:102] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:48.021387  126402 pod_ready.go:92] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.021411  126402 pod_ready.go:81] duration metric: took 12.007987234s for pod "etcd-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.021421  126402 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.026793  126402 pod_ready.go:92] pod "kube-apiserver-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.026818  126402 pod_ready.go:81] duration metric: took 5.389306ms for pod "kube-apiserver-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.026833  126402 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.534862  126402 pod_ready.go:92] pod "kube-controller-manager-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.534886  126402 pod_ready.go:81] duration metric: took 508.044758ms for pod "kube-controller-manager-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.534897  126402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jndg6" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.542255  126402 pod_ready.go:92] pod "kube-proxy-jndg6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.542276  126402 pod_ready.go:81] duration metric: took 7.372371ms for pod "kube-proxy-jndg6" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.542287  126402 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.548178  126402 pod_ready.go:92] pod "kube-scheduler-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.548198  126402 pod_ready.go:81] duration metric: took 5.903736ms for pod "kube-scheduler-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.548207  126402 pod_ready.go:38] duration metric: took 12.546135403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:11:48.548229  126402 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:11:48.567062  126402 ops.go:34] apiserver oom_adj: -16
	I0420 01:11:48.567085  126402 kubeadm.go:591] duration metric: took 30.198604506s to restartPrimaryControlPlane
	I0420 01:11:48.567094  126402 kubeadm.go:393] duration metric: took 30.477793878s to StartCluster
	I0420 01:11:48.567119  126402 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:11:48.567172  126402 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:11:48.568634  126402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:11:48.568877  126402 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:11:48.570650  126402 out.go:177] * Verifying Kubernetes components...
	I0420 01:11:48.568945  126402 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:11:48.569111  126402 config.go:182] Loaded profile config "pause-680144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:11:48.572102  126402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:11:48.573779  126402 out.go:177] * Enabled addons: 
	I0420 01:11:45.867693  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .Start
	I0420 01:11:45.867879  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Ensuring networks are active...
	I0420 01:11:45.868775  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Ensuring network default is active
	I0420 01:11:45.869148  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Ensuring network mk-kubernetes-upgrade-345460 is active
	I0420 01:11:45.869673  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Getting domain xml...
	I0420 01:11:45.870370  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Creating domain...
	I0420 01:11:47.191313  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Waiting to get IP...
	I0420 01:11:47.192428  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.192906  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.192972  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:47.192880  126846 retry.go:31] will retry after 278.355914ms: waiting for machine to come up
	I0420 01:11:47.472383  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.472947  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.472977  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:47.472892  126846 retry.go:31] will retry after 307.613676ms: waiting for machine to come up
	I0420 01:11:47.782455  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.783039  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.783071  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:47.782980  126846 retry.go:31] will retry after 409.24697ms: waiting for machine to come up
	I0420 01:11:48.193553  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:48.194125  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:48.194151  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:48.194019  126846 retry.go:31] will retry after 606.575733ms: waiting for machine to come up
	I0420 01:11:48.802269  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:48.803033  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:48.803069  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:48.802883  126846 retry.go:31] will retry after 619.6654ms: waiting for machine to come up
	I0420 01:11:49.424362  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:49.424948  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:49.425004  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:49.424934  126846 retry.go:31] will retry after 807.006627ms: waiting for machine to come up
	I0420 01:11:50.233908  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:50.234600  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:50.234628  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:50.234519  126846 retry.go:31] will retry after 867.385171ms: waiting for machine to come up
	I0420 01:11:48.575269  126402 addons.go:505] duration metric: took 6.321605ms for enable addons: enabled=[]
	I0420 01:11:48.792367  126402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:11:48.813776  126402 node_ready.go:35] waiting up to 6m0s for node "pause-680144" to be "Ready" ...
	I0420 01:11:48.816851  126402 node_ready.go:49] node "pause-680144" has status "Ready":"True"
	I0420 01:11:48.816873  126402 node_ready.go:38] duration metric: took 3.065804ms for node "pause-680144" to be "Ready" ...
	I0420 01:11:48.816885  126402 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:11:48.823145  126402 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n2xqv" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:49.217842  126402 pod_ready.go:92] pod "coredns-7db6d8ff4d-n2xqv" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:49.217870  126402 pod_ready.go:81] duration metric: took 394.700706ms for pod "coredns-7db6d8ff4d-n2xqv" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:49.217883  126402 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:49.619666  126402 pod_ready.go:92] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:49.619693  126402 pod_ready.go:81] duration metric: took 401.801603ms for pod "etcd-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:49.619706  126402 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.019286  126402 pod_ready.go:92] pod "kube-apiserver-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:50.019313  126402 pod_ready.go:81] duration metric: took 399.599415ms for pod "kube-apiserver-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.019323  126402 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.419722  126402 pod_ready.go:92] pod "kube-controller-manager-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:50.419745  126402 pod_ready.go:81] duration metric: took 400.415528ms for pod "kube-controller-manager-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.419755  126402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jndg6" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.818025  126402 pod_ready.go:92] pod "kube-proxy-jndg6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:50.818049  126402 pod_ready.go:81] duration metric: took 398.287987ms for pod "kube-proxy-jndg6" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.818058  126402 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:51.218062  126402 pod_ready.go:92] pod "kube-scheduler-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:51.218088  126402 pod_ready.go:81] duration metric: took 400.021355ms for pod "kube-scheduler-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:51.218097  126402 pod_ready.go:38] duration metric: took 2.401199295s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:11:51.218113  126402 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:11:51.218160  126402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:11:51.235074  126402 api_server.go:72] duration metric: took 2.666165937s to wait for apiserver process to appear ...
	I0420 01:11:51.235098  126402 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:11:51.235120  126402 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I0420 01:11:51.240233  126402 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I0420 01:11:51.241328  126402 api_server.go:141] control plane version: v1.30.0
	I0420 01:11:51.241355  126402 api_server.go:131] duration metric: took 6.249294ms to wait for apiserver health ...
	I0420 01:11:51.241365  126402 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:11:51.421764  126402 system_pods.go:59] 6 kube-system pods found
	I0420 01:11:51.421792  126402 system_pods.go:61] "coredns-7db6d8ff4d-n2xqv" [35b4f9fc-975d-4d43-ade9-2818e1771e07] Running
	I0420 01:11:51.421802  126402 system_pods.go:61] "etcd-pause-680144" [efd2b2a8-8eda-418a-99d1-19d4090fbdca] Running
	I0420 01:11:51.421808  126402 system_pods.go:61] "kube-apiserver-pause-680144" [dacc3996-946e-4c5b-b960-efc8148d4a2d] Running
	I0420 01:11:51.421813  126402 system_pods.go:61] "kube-controller-manager-pause-680144" [90c81c0e-75d3-441d-b6b4-19a28ad116cc] Running
	I0420 01:11:51.421823  126402 system_pods.go:61] "kube-proxy-jndg6" [45a3f948-63b9-45b8-961e-b1b573c2b862] Running
	I0420 01:11:51.421827  126402 system_pods.go:61] "kube-scheduler-pause-680144" [e4284add-d032-4f00-a04e-059fcc279f4e] Running
	I0420 01:11:51.421836  126402 system_pods.go:74] duration metric: took 180.463976ms to wait for pod list to return data ...
	I0420 01:11:51.421850  126402 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:11:51.619486  126402 default_sa.go:45] found service account: "default"
	I0420 01:11:51.619516  126402 default_sa.go:55] duration metric: took 197.654132ms for default service account to be created ...
	I0420 01:11:51.619527  126402 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:11:51.823334  126402 system_pods.go:86] 6 kube-system pods found
	I0420 01:11:51.823374  126402 system_pods.go:89] "coredns-7db6d8ff4d-n2xqv" [35b4f9fc-975d-4d43-ade9-2818e1771e07] Running
	I0420 01:11:51.823384  126402 system_pods.go:89] "etcd-pause-680144" [efd2b2a8-8eda-418a-99d1-19d4090fbdca] Running
	I0420 01:11:51.823401  126402 system_pods.go:89] "kube-apiserver-pause-680144" [dacc3996-946e-4c5b-b960-efc8148d4a2d] Running
	I0420 01:11:51.823410  126402 system_pods.go:89] "kube-controller-manager-pause-680144" [90c81c0e-75d3-441d-b6b4-19a28ad116cc] Running
	I0420 01:11:51.823425  126402 system_pods.go:89] "kube-proxy-jndg6" [45a3f948-63b9-45b8-961e-b1b573c2b862] Running
	I0420 01:11:51.823439  126402 system_pods.go:89] "kube-scheduler-pause-680144" [e4284add-d032-4f00-a04e-059fcc279f4e] Running
	I0420 01:11:51.823450  126402 system_pods.go:126] duration metric: took 203.914771ms to wait for k8s-apps to be running ...
	I0420 01:11:51.823470  126402 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:11:51.823536  126402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:11:51.849267  126402 system_svc.go:56] duration metric: took 25.790354ms WaitForService to wait for kubelet
	I0420 01:11:51.849293  126402 kubeadm.go:576] duration metric: took 3.280387005s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:11:51.849324  126402 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:11:52.018536  126402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:11:52.018568  126402 node_conditions.go:123] node cpu capacity is 2
	I0420 01:11:52.018583  126402 node_conditions.go:105] duration metric: took 169.242351ms to run NodePressure ...
	I0420 01:11:52.018602  126402 start.go:240] waiting for startup goroutines ...
	I0420 01:11:52.018620  126402 start.go:245] waiting for cluster config update ...
	I0420 01:11:52.018634  126402 start.go:254] writing updated cluster config ...
	I0420 01:11:52.018939  126402 ssh_runner.go:195] Run: rm -f paused
	I0420 01:11:52.078955  126402 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:11:52.081819  126402 out.go:177] * Done! kubectl is now configured to use "pause-680144" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.871577290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575512871552849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1024b9c7-dd29-4f6e-8f57-f7377358c215 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.872416107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=900ce21a-437a-4d66-9423-c3d4affd1770 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.872518551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=900ce21a-437a-4d66-9423-c3d4affd1770 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.872824642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575491146851437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575491162448940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575491143740578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575491130451085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2,PodSandboxId:dc4a949cfd3a4b4f906ef02e1bac52af01e00a4001ded750ff0363ac906f1e6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575478299552441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6,PodSandboxId:e7b87a2b6b167fde0f91587d57d2431040a62f4856bc1a1349fedcd77278ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575477557417087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io
.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575477462029965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575477365903107,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575477314982781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575477270781742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176,PodSandboxId:da19d23d434d691293af78b242fe629130fd93d712a21dfb15e5f041331c1f1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575419634736441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de,PodSandboxId:b8947ae0e6d919c8660da9dda59e5d231f54b09b47c8c960bdfd4e1ac2124e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575419723517300,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=900ce21a-437a-4d66-9423-c3d4affd1770 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.932092802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89286aa0-a6d0-4933-88ff-5ee4f4c286b6 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.932181930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89286aa0-a6d0-4933-88ff-5ee4f4c286b6 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.934489929Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5fb2dee-63a3-4b53-a33c-ad09f2d9e906 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.935155187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575512934917140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5fb2dee-63a3-4b53-a33c-ad09f2d9e906 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.935929450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21163951-7a48-4b67-ae17-7396c3119ac5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.936021176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21163951-7a48-4b67-ae17-7396c3119ac5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.936515162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575491146851437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575491162448940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575491143740578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575491130451085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2,PodSandboxId:dc4a949cfd3a4b4f906ef02e1bac52af01e00a4001ded750ff0363ac906f1e6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575478299552441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6,PodSandboxId:e7b87a2b6b167fde0f91587d57d2431040a62f4856bc1a1349fedcd77278ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575477557417087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io
.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575477462029965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575477365903107,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575477314982781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575477270781742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176,PodSandboxId:da19d23d434d691293af78b242fe629130fd93d712a21dfb15e5f041331c1f1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575419634736441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de,PodSandboxId:b8947ae0e6d919c8660da9dda59e5d231f54b09b47c8c960bdfd4e1ac2124e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575419723517300,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21163951-7a48-4b67-ae17-7396c3119ac5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.982989839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3af936fa-4868-42da-a58a-831a3abaaa7d name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.983088337Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3af936fa-4868-42da-a58a-831a3abaaa7d name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.984731312Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=254289df-92d7-4d4d-a42f-f3ef0abf4b35 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.985269535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575512985179642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=254289df-92d7-4d4d-a42f-f3ef0abf4b35 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.985818489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f926c483-5abe-4a82-8b28-82cdd506965d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.985923233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f926c483-5abe-4a82-8b28-82cdd506965d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:52 pause-680144 crio[2441]: time="2024-04-20 01:11:52.986174090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575491146851437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575491162448940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575491143740578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575491130451085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2,PodSandboxId:dc4a949cfd3a4b4f906ef02e1bac52af01e00a4001ded750ff0363ac906f1e6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575478299552441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6,PodSandboxId:e7b87a2b6b167fde0f91587d57d2431040a62f4856bc1a1349fedcd77278ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575477557417087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io
.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575477462029965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575477365903107,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575477314982781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575477270781742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176,PodSandboxId:da19d23d434d691293af78b242fe629130fd93d712a21dfb15e5f041331c1f1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575419634736441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de,PodSandboxId:b8947ae0e6d919c8660da9dda59e5d231f54b09b47c8c960bdfd4e1ac2124e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575419723517300,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f926c483-5abe-4a82-8b28-82cdd506965d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:53 pause-680144 crio[2441]: time="2024-04-20 01:11:53.033518758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=206f9c8a-9a4b-4d07-bd67-5a75c7be8849 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:53 pause-680144 crio[2441]: time="2024-04-20 01:11:53.033610626Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=206f9c8a-9a4b-4d07-bd67-5a75c7be8849 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:53 pause-680144 crio[2441]: time="2024-04-20 01:11:53.035755673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9314c62-f067-4fc0-8a30-0ee20ce8a4e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:53 pause-680144 crio[2441]: time="2024-04-20 01:11:53.036371142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575513036342683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9314c62-f067-4fc0-8a30-0ee20ce8a4e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:53 pause-680144 crio[2441]: time="2024-04-20 01:11:53.037566621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d669b2d-e0cf-4ae9-9005-abb326b6c4fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:53 pause-680144 crio[2441]: time="2024-04-20 01:11:53.037641330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d669b2d-e0cf-4ae9-9005-abb326b6c4fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:53 pause-680144 crio[2441]: time="2024-04-20 01:11:53.038067417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575491146851437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575491162448940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575491143740578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575491130451085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2,PodSandboxId:dc4a949cfd3a4b4f906ef02e1bac52af01e00a4001ded750ff0363ac906f1e6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575478299552441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6,PodSandboxId:e7b87a2b6b167fde0f91587d57d2431040a62f4856bc1a1349fedcd77278ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575477557417087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io
.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575477462029965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575477365903107,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575477314982781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575477270781742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176,PodSandboxId:da19d23d434d691293af78b242fe629130fd93d712a21dfb15e5f041331c1f1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575419634736441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de,PodSandboxId:b8947ae0e6d919c8660da9dda59e5d231f54b09b47c8c960bdfd4e1ac2124e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575419723517300,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d669b2d-e0cf-4ae9-9005-abb326b6c4fc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	58f6fedf5a80c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   21 seconds ago       Running             kube-scheduler            2                   70a600126c5a2       kube-scheduler-pause-680144
	a7079c73478b7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago       Running             etcd                      2                   e8cb5da210123       etcd-pause-680144
	3c171416879a3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   21 seconds ago       Running             kube-controller-manager   2                   46a3b94b2804c       kube-controller-manager-pause-680144
	57b091fcd81d2       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   22 seconds ago       Running             kube-apiserver            2                   e47f531de3f23       kube-apiserver-pause-680144
	229e3f712b817       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   34 seconds ago       Running             coredns                   1                   dc4a949cfd3a4       coredns-7db6d8ff4d-n2xqv
	c78e339dfe367       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   35 seconds ago       Running             kube-proxy                1                   e7b87a2b6b167       kube-proxy-jndg6
	2ce651de3c308       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   35 seconds ago       Exited              kube-scheduler            1                   70a600126c5a2       kube-scheduler-pause-680144
	2d5a96f57a736       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   35 seconds ago       Exited              etcd                      1                   e8cb5da210123       etcd-pause-680144
	b10d8cec2caed       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   35 seconds ago       Exited              kube-apiserver            1                   e47f531de3f23       kube-apiserver-pause-680144
	12f76e07e9fd0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   35 seconds ago       Exited              kube-controller-manager   1                   46a3b94b2804c       kube-controller-manager-pause-680144
	1e88e57cc3c7b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   b8947ae0e6d91       coredns-7db6d8ff4d-n2xqv
	59845b36c60dc       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   About a minute ago   Exited              kube-proxy                0                   da19d23d434d6       kube-proxy-jndg6
	
	
	==> coredns [1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1232768171]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:10:20.216) (total time: 30003ms):
	Trace[1232768171]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:10:50.219)
	Trace[1232768171]: [30.003127668s] [30.003127668s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[51788946]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:10:20.216) (total time: 30003ms):
	Trace[51788946]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (01:10:50.220)
	Trace[51788946]: [30.003655733s] [30.003655733s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[621536973]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:10:20.219) (total time: 30002ms):
	Trace[621536973]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:10:50.221)
	Trace[621536973]: [30.002519428s] [30.002519428s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33592 - 49771 "HINFO IN 2256637506804474767.8792056235808915989. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009355939s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2] <==
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43317 - 10702 "HINFO IN 6865145529338662859.4907811133070578253. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011610026s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2065321125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:11:18.882) (total time: 10001ms):
	Trace[2065321125]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (01:11:28.883)
	Trace[2065321125]: [10.00111154s] [10.00111154s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[54349611]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:11:18.882) (total time: 10001ms):
	Trace[54349611]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:11:28.884)
	Trace[54349611]: [10.001621347s] [10.001621347s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1017594132]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:11:18.881) (total time: 10002ms):
	Trace[1017594132]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (01:11:28.884)
	Trace[1017594132]: [10.002512237s] [10.002512237s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-680144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-680144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=pause-680144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T01_10_06_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:10:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-680144
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:11:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:11:34 +0000   Sat, 20 Apr 2024 01:10:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:11:34 +0000   Sat, 20 Apr 2024 01:10:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:11:34 +0000   Sat, 20 Apr 2024 01:10:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:11:34 +0000   Sat, 20 Apr 2024 01:10:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.180
	  Hostname:    pause-680144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f85af7a15d24afaa34d3485bf416ee0
	  System UUID:                9f85af7a-15d2-4afa-a34d-3485bf416ee0
	  Boot ID:                    67e5bc76-3f7d-4666-885d-621a6b4231c3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-n2xqv                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     95s
	  kube-system                 etcd-pause-680144                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         108s
	  kube-system                 kube-apiserver-pause-680144             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-pause-680144    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-jndg6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-pause-680144             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 93s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     108s               kubelet          Node pause-680144 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node pause-680144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node pause-680144 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  NodeReady                107s               kubelet          Node pause-680144 status is now: NodeReady
	  Normal  RegisteredNode           96s                node-controller  Node pause-680144 event: Registered Node pause-680144 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-680144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-680144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-680144 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-680144 event: Registered Node pause-680144 in Controller
	
	
	==> dmesg <==
	[  +0.062956] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073117] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.159699] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.164963] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.329775] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +5.065680] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.066439] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.231807] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.771077] kauditd_printk_skb: 54 callbacks suppressed
	[Apr20 01:10] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.088040] kauditd_printk_skb: 33 callbacks suppressed
	[ +12.915515] systemd-fstab-generator[1491]: Ignoring "noauto" option for root device
	[  +0.134208] kauditd_printk_skb: 21 callbacks suppressed
	[ +40.613342] kauditd_printk_skb: 96 callbacks suppressed
	[Apr20 01:11] systemd-fstab-generator[2359]: Ignoring "noauto" option for root device
	[  +0.147752] systemd-fstab-generator[2371]: Ignoring "noauto" option for root device
	[  +0.188827] systemd-fstab-generator[2385]: Ignoring "noauto" option for root device
	[  +0.153505] systemd-fstab-generator[2397]: Ignoring "noauto" option for root device
	[  +0.329143] systemd-fstab-generator[2425]: Ignoring "noauto" option for root device
	[  +7.517481] systemd-fstab-generator[2552]: Ignoring "noauto" option for root device
	[  +0.086001] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.485673] kauditd_printk_skb: 87 callbacks suppressed
	[  +1.257561] systemd-fstab-generator[3277]: Ignoring "noauto" option for root device
	[  +4.378918] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.935605] systemd-fstab-generator[3629]: Ignoring "noauto" option for root device
	
	
	==> etcd [2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b] <==
	{"level":"info","ts":"2024-04-20T01:11:18.19214Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"46.600899ms"}
	{"level":"info","ts":"2024-04-20T01:11:18.289505Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-20T01:11:18.303404Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","commit-index":422}
	{"level":"info","ts":"2024-04-20T01:11:18.303541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-20T01:11:18.303596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became follower at term 2"}
	{"level":"info","ts":"2024-04-20T01:11:18.30361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a1d4aad7c74b318 [peers: [], term: 2, commit: 422, applied: 0, lastindex: 422, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-20T01:11:18.328555Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-20T01:11:18.371357Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":404}
	{"level":"info","ts":"2024-04-20T01:11:18.385729Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-20T01:11:18.396934Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a1d4aad7c74b318","timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:11:18.397166Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a1d4aad7c74b318"}
	{"level":"info","ts":"2024-04-20T01:11:18.397264Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"a1d4aad7c74b318","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-20T01:11:18.397504Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-20T01:11:18.397617Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:11:18.397645Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:11:18.397659Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:11:18.433853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 switched to configuration voters=(728820823681708824)"}
	{"level":"info","ts":"2024-04-20T01:11:18.43392Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","added-peer-id":"a1d4aad7c74b318","added-peer-peer-urls":["https://192.168.72.180:2380"]}
	{"level":"info","ts":"2024-04-20T01:11:18.434025Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:11:18.434052Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:11:18.480995Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T01:11:18.482608Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2024-04-20T01:11:18.487703Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2024-04-20T01:11:18.487929Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a1d4aad7c74b318","initial-advertise-peer-urls":["https://192.168.72.180:2380"],"listen-peer-urls":["https://192.168.72.180:2380"],"advertise-client-urls":["https://192.168.72.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:11:18.487986Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1] <==
	{"level":"info","ts":"2024-04-20T01:11:31.581905Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","added-peer-id":"a1d4aad7c74b318","added-peer-peer-urls":["https://192.168.72.180:2380"]}
	{"level":"info","ts":"2024-04-20T01:11:31.582019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:11:31.582066Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:11:31.586592Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T01:11:31.586712Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2024-04-20T01:11:31.58689Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2024-04-20T01:11:31.588626Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T01:11:31.588555Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a1d4aad7c74b318","initial-advertise-peer-urls":["https://192.168.72.180:2380"],"listen-peer-urls":["https://192.168.72.180:2380"],"advertise-client-urls":["https://192.168.72.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:11:32.767729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-20T01:11:32.767815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-20T01:11:32.767847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgPreVoteResp from a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2024-04-20T01:11:32.767874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became candidate at term 3"}
	{"level":"info","ts":"2024-04-20T01:11:32.76788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgVoteResp from a1d4aad7c74b318 at term 3"}
	{"level":"info","ts":"2024-04-20T01:11:32.767888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became leader at term 3"}
	{"level":"info","ts":"2024-04-20T01:11:32.767897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a1d4aad7c74b318 elected leader a1d4aad7c74b318 at term 3"}
	{"level":"info","ts":"2024-04-20T01:11:32.775105Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a1d4aad7c74b318","local-member-attributes":"{Name:pause-680144 ClientURLs:[https://192.168.72.180:2379]}","request-path":"/0/members/a1d4aad7c74b318/attributes","cluster-id":"1bb44bc72743d07d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:11:32.775163Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:11:32.775709Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:11:32.775818Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:11:32.775881Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:11:32.777532Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T01:11:32.779532Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.180:2379"}
	{"level":"warn","ts":"2024-04-20T01:11:35.085072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.813872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12905221932579286935 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-jndg6\" mod_revision:373 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-jndg6\" value_size:4638 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-jndg6\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-20T01:11:35.085185Z","caller":"traceutil/trace.go:171","msg":"trace[1398383649] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"311.494929ms","start":"2024-04-20T01:11:34.773673Z","end":"2024-04-20T01:11:35.085168Z","steps":["trace[1398383649] 'process raft request'  (duration: 156.98102ms)","trace[1398383649] 'compare'  (duration: 153.702608ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T01:11:35.085308Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:34.773659Z","time spent":"311.619097ms","remote":"127.0.0.1:43562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4689,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-jndg6\" mod_revision:373 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-jndg6\" value_size:4638 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-jndg6\" > >"}
	
	
	==> kernel <==
	 01:11:53 up 2 min,  0 users,  load average: 0.40, 0.26, 0.10
	Linux pause-680144 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d] <==
	I0420 01:11:34.246724       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0420 01:11:34.355872       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 01:11:34.356157       1 policy_source.go:224] refreshing policies
	I0420 01:11:34.370180       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 01:11:34.377618       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 01:11:34.387749       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 01:11:34.388865       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 01:11:34.388923       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 01:11:34.388970       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 01:11:34.388976       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 01:11:34.400858       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 01:11:34.401412       1 aggregator.go:165] initial CRD sync complete...
	I0420 01:11:34.401597       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 01:11:34.401640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 01:11:34.401669       1 cache.go:39] Caches are synced for autoregister controller
	I0420 01:11:34.423594       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0420 01:11:34.449353       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 01:11:35.183282       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0420 01:11:35.827738       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0420 01:11:35.843012       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 01:11:35.879810       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 01:11:35.921544       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0420 01:11:35.932998       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0420 01:11:46.703003       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0420 01:11:46.980128       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af] <==
	I0420 01:11:17.922949       1 options.go:221] external host was not specified, using 192.168.72.180
	I0420 01:11:17.924304       1 server.go:148] Version: v1.30.0
	I0420 01:11:17.924360       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:18.822418       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0420 01:11:18.824915       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0420 01:11:18.825442       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0420 01:11:18.825608       1 instance.go:299] Using reconciler: lease
	I0420 01:11:18.824940       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0420 01:11:18.827450       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:18.827540       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:18.827626       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:19.828820       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:19.828921       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:19.829106       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:21.127387       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:21.372633       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:21.715962       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:24.042038       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:24.060482       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:24.158542       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:27.790934       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:28.144956       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:28.672076       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c] <==
	I0420 01:11:18.931911       1 serving.go:380] Generated self-signed cert in-memory
	I0420 01:11:19.179036       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0420 01:11:19.179156       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:19.181052       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0420 01:11:19.181339       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0420 01:11:19.181416       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0420 01:11:19.182039       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553] <==
	I0420 01:11:46.709472       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0420 01:11:46.709549       1 shared_informer.go:320] Caches are synced for disruption
	I0420 01:11:46.709594       1 shared_informer.go:320] Caches are synced for ephemeral
	I0420 01:11:46.711030       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0420 01:11:46.715331       1 shared_informer.go:320] Caches are synced for deployment
	I0420 01:11:46.718522       1 shared_informer.go:320] Caches are synced for node
	I0420 01:11:46.718602       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0420 01:11:46.718650       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0420 01:11:46.718679       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0420 01:11:46.718701       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0420 01:11:46.720335       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0420 01:11:46.723136       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0420 01:11:46.729449       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0420 01:11:46.755786       1 shared_informer.go:320] Caches are synced for stateful set
	I0420 01:11:46.767291       1 shared_informer.go:320] Caches are synced for HPA
	I0420 01:11:46.785318       1 shared_informer.go:320] Caches are synced for persistent volume
	I0420 01:11:46.789425       1 shared_informer.go:320] Caches are synced for PV protection
	I0420 01:11:46.815873       1 shared_informer.go:320] Caches are synced for attach detach
	I0420 01:11:46.914097       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0420 01:11:46.933506       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 01:11:46.942301       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 01:11:46.969324       1 shared_informer.go:320] Caches are synced for endpoint
	I0420 01:11:47.350434       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 01:11:47.350563       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0420 01:11:47.388460       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176] <==
	I0420 01:10:20.213351       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:10:20.226755       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.180"]
	I0420 01:10:20.284850       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:10:20.284922       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:10:20.284941       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:10:20.288007       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:10:20.288189       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:10:20.288285       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:10:20.289313       1 config.go:192] "Starting service config controller"
	I0420 01:10:20.289458       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:10:20.289486       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:10:20.289489       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:10:20.290047       1 config.go:319] "Starting node config controller"
	I0420 01:10:20.290086       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:10:20.390571       1 shared_informer.go:320] Caches are synced for node config
	I0420 01:10:20.390658       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:10:20.390620       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6] <==
	I0420 01:11:18.919831       1 server_linux.go:69] "Using iptables proxy"
	E0420 01:11:29.899191       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-680144\": dial tcp 192.168.72.180:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.180:52516->192.168.72.180:8443: read: connection reset by peer"
	E0420 01:11:30.917134       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-680144\": dial tcp 192.168.72.180:8443: connect: connection refused"
	I0420 01:11:34.427176       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.180"]
	I0420 01:11:34.497768       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:11:34.497852       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:11:34.497873       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:11:34.501144       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:11:34.501675       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:11:34.501724       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:34.503912       1 config.go:192] "Starting service config controller"
	I0420 01:11:34.503955       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:11:34.503981       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:11:34.503985       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:11:34.505903       1 config.go:319] "Starting node config controller"
	I0420 01:11:34.508834       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:11:34.604310       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:11:34.604460       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:11:34.613360       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980] <==
	
	
	==> kube-scheduler [58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0] <==
	I0420 01:11:32.280821       1 serving.go:380] Generated self-signed cert in-memory
	W0420 01:11:34.296736       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0420 01:11:34.296805       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:11:34.296817       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0420 01:11:34.296828       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0420 01:11:34.360176       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0420 01:11:34.364293       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:34.374124       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0420 01:11:34.374441       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0420 01:11:34.374548       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 01:11:34.374604       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0420 01:11:34.475808       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 01:11:30 pause-680144 kubelet[3284]: I0420 01:11:30.830040    3284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/794c4a62a0b10913407c7946e3fa7672-ca-certs\") pod \"kube-apiserver-pause-680144\" (UID: \"794c4a62a0b10913407c7946e3fa7672\") " pod="kube-system/kube-apiserver-pause-680144"
	Apr 20 01:11:30 pause-680144 kubelet[3284]: I0420 01:11:30.830056    3284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c7753d065e8b91b151adc80443b939d-ca-certs\") pod \"kube-controller-manager-pause-680144\" (UID: \"6c7753d065e8b91b151adc80443b939d\") " pod="kube-system/kube-controller-manager-pause-680144"
	Apr 20 01:11:30 pause-680144 kubelet[3284]: E0420 01:11:30.830363    3284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-680144?timeout=10s\": dial tcp 192.168.72.180:8443: connect: connection refused" interval="400ms"
	Apr 20 01:11:30 pause-680144 kubelet[3284]: I0420 01:11:30.925391    3284 kubelet_node_status.go:73] "Attempting to register node" node="pause-680144"
	Apr 20 01:11:30 pause-680144 kubelet[3284]: E0420 01:11:30.926462    3284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.180:8443: connect: connection refused" node="pause-680144"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.106890    3284 scope.go:117] "RemoveContainer" containerID="2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.109698    3284 scope.go:117] "RemoveContainer" containerID="b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.110959    3284 scope.go:117] "RemoveContainer" containerID="12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.111440    3284 scope.go:117] "RemoveContainer" containerID="2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: E0420 01:11:31.232580    3284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-680144?timeout=10s\": dial tcp 192.168.72.180:8443: connect: connection refused" interval="800ms"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.327991    3284 kubelet_node_status.go:73] "Attempting to register node" node="pause-680144"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: E0420 01:11:31.329043    3284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.180:8443: connect: connection refused" node="pause-680144"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: W0420 01:11:31.460187    3284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-680144&limit=500&resourceVersion=0": dial tcp 192.168.72.180:8443: connect: connection refused
	Apr 20 01:11:31 pause-680144 kubelet[3284]: E0420 01:11:31.460538    3284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-680144&limit=500&resourceVersion=0": dial tcp 192.168.72.180:8443: connect: connection refused
	Apr 20 01:11:32 pause-680144 kubelet[3284]: I0420 01:11:32.132103    3284 kubelet_node_status.go:73] "Attempting to register node" node="pause-680144"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.443628    3284 kubelet_node_status.go:112] "Node was previously registered" node="pause-680144"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.444107    3284 kubelet_node_status.go:76] "Successfully registered node" node="pause-680144"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.446567    3284 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.448099    3284 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.605383    3284 apiserver.go:52] "Watching apiserver"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.610619    3284 topology_manager.go:215] "Topology Admit Handler" podUID="35b4f9fc-975d-4d43-ade9-2818e1771e07" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n2xqv"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.612352    3284 topology_manager.go:215] "Topology Admit Handler" podUID="45a3f948-63b9-45b8-961e-b1b573c2b862" podNamespace="kube-system" podName="kube-proxy-jndg6"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.620878    3284 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.715499    3284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45a3f948-63b9-45b8-961e-b1b573c2b862-xtables-lock\") pod \"kube-proxy-jndg6\" (UID: \"45a3f948-63b9-45b8-961e-b1b573c2b862\") " pod="kube-system/kube-proxy-jndg6"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.715619    3284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45a3f948-63b9-45b8-961e-b1b573c2b862-lib-modules\") pod \"kube-proxy-jndg6\" (UID: \"45a3f948-63b9-45b8-961e-b1b573c2b862\") " pod="kube-system/kube-proxy-jndg6"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-680144 -n pause-680144
helpers_test.go:261: (dbg) Run:  kubectl --context pause-680144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-680144 -n pause-680144
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-680144 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-680144 logs -n 25: (1.949975592s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-345460                         | kubernetes-upgrade-345460 | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	| start   | -p kubernetes-upgrade-345460                         | kubernetes-upgrade-345460 | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat                              | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat                              | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat                              | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo crictl                           | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo crictl ps                        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | --all                                                |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo find                             | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo ip a s                           | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	| ssh     | -p auto-831611 sudo ip r s                           | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	| ssh     | -p auto-831611 sudo                                  | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo iptables                         | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo systemctl                        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | status kubelet --all --full                          |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo journalctl                       | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | -xeu kubelet --all --full                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat                              | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat                              | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo systemctl                        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC |                     |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo systemctl                        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat                              | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-831611 pgrep -a                            | bridge-831611             | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo docker                           | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo systemctl                        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC |                     |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo systemctl                        | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC | 20 Apr 24 01:11 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat                              | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-831611 sudo cat                              | auto-831611               | jenkins | v1.33.0 | 20 Apr 24 01:11 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:11:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:11:45.736759  126811 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:11:45.737023  126811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:11:45.737036  126811 out.go:304] Setting ErrFile to fd 2...
	I0420 01:11:45.737043  126811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:11:45.737297  126811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:11:45.737982  126811 out.go:298] Setting JSON to false
	I0420 01:11:45.738873  126811 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14053,"bootTime":1713561453,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:11:45.738935  126811 start.go:139] virtualization: kvm guest
	I0420 01:11:45.741464  126811 out.go:177] * [kubernetes-upgrade-345460] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:11:45.742900  126811 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:11:45.742948  126811 notify.go:220] Checking for updates...
	I0420 01:11:45.744243  126811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:11:45.745616  126811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:11:45.747001  126811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:11:45.748399  126811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:11:45.749687  126811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:11:45.751369  126811 config.go:182] Loaded profile config "kubernetes-upgrade-345460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:11:45.751805  126811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:11:45.751869  126811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:11:45.766642  126811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45239
	I0420 01:11:45.767026  126811 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:11:45.767534  126811 main.go:141] libmachine: Using API Version  1
	I0420 01:11:45.767558  126811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:11:45.767864  126811 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:11:45.768043  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:11:45.768271  126811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:11:45.768593  126811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:11:45.768636  126811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:11:45.782928  126811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
	I0420 01:11:45.783403  126811 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:11:45.783926  126811 main.go:141] libmachine: Using API Version  1
	I0420 01:11:45.783967  126811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:11:45.784266  126811 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:11:45.784495  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:11:45.820623  126811 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:11:45.822310  126811 start.go:297] selected driver: kvm2
	I0420 01:11:45.822329  126811 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:11:45.822459  126811 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:11:45.823398  126811 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:11:45.823479  126811 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:11:45.838764  126811 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:11:45.839203  126811 cni.go:84] Creating CNI manager for ""
	I0420 01:11:45.839227  126811 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:11:45.839280  126811 start.go:340] cluster config:
	{Name:kubernetes-upgrade-345460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-345460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:11:45.839379  126811 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:11:45.841340  126811 out.go:177] * Starting "kubernetes-upgrade-345460" primary control-plane node in "kubernetes-upgrade-345460" cluster
	I0420 01:11:45.842603  126811 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:11:45.842640  126811 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:11:45.842651  126811 cache.go:56] Caching tarball of preloaded images
	I0420 01:11:45.842728  126811 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:11:45.842738  126811 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:11:45.842853  126811 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kubernetes-upgrade-345460/config.json ...
	I0420 01:11:45.843082  126811 start.go:360] acquireMachinesLock for kubernetes-upgrade-345460: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:11:45.843137  126811 start.go:364] duration metric: took 35.038µs to acquireMachinesLock for "kubernetes-upgrade-345460"
	I0420 01:11:45.843157  126811 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:11:45.843163  126811 fix.go:54] fixHost starting: 
	I0420 01:11:45.843917  126811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:11:45.843966  126811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:11:45.860884  126811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35125
	I0420 01:11:45.861396  126811 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:11:45.861896  126811 main.go:141] libmachine: Using API Version  1
	I0420 01:11:45.861913  126811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:11:45.862269  126811 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:11:45.862456  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	I0420 01:11:45.862616  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .GetState
	I0420 01:11:45.864219  126811 fix.go:112] recreateIfNeeded on kubernetes-upgrade-345460: state=Stopped err=<nil>
	I0420 01:11:45.864240  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .DriverName
	W0420 01:11:45.864408  126811 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:11:45.866292  126811 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-345460" ...
	I0420 01:11:42.527680  126402 pod_ready.go:102] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:45.022210  126402 pod_ready.go:102] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:45.328178  125761 pod_ready.go:102] pod "coredns-7db6d8ff4d-7jz9v" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:47.329061  125761 pod_ready.go:102] pod "coredns-7db6d8ff4d-7jz9v" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:47.521963  126402 pod_ready.go:102] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:48.021387  126402 pod_ready.go:92] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.021411  126402 pod_ready.go:81] duration metric: took 12.007987234s for pod "etcd-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.021421  126402 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.026793  126402 pod_ready.go:92] pod "kube-apiserver-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.026818  126402 pod_ready.go:81] duration metric: took 5.389306ms for pod "kube-apiserver-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.026833  126402 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.534862  126402 pod_ready.go:92] pod "kube-controller-manager-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.534886  126402 pod_ready.go:81] duration metric: took 508.044758ms for pod "kube-controller-manager-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.534897  126402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jndg6" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.542255  126402 pod_ready.go:92] pod "kube-proxy-jndg6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.542276  126402 pod_ready.go:81] duration metric: took 7.372371ms for pod "kube-proxy-jndg6" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.542287  126402 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.548178  126402 pod_ready.go:92] pod "kube-scheduler-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:48.548198  126402 pod_ready.go:81] duration metric: took 5.903736ms for pod "kube-scheduler-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:48.548207  126402 pod_ready.go:38] duration metric: took 12.546135403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:11:48.548229  126402 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:11:48.567062  126402 ops.go:34] apiserver oom_adj: -16
	I0420 01:11:48.567085  126402 kubeadm.go:591] duration metric: took 30.198604506s to restartPrimaryControlPlane
	I0420 01:11:48.567094  126402 kubeadm.go:393] duration metric: took 30.477793878s to StartCluster
	I0420 01:11:48.567119  126402 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:11:48.567172  126402 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:11:48.568634  126402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:11:48.568877  126402 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:11:48.570650  126402 out.go:177] * Verifying Kubernetes components...
	I0420 01:11:48.568945  126402 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:11:48.569111  126402 config.go:182] Loaded profile config "pause-680144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:11:48.572102  126402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:11:48.573779  126402 out.go:177] * Enabled addons: 
	I0420 01:11:45.867693  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Calling .Start
	I0420 01:11:45.867879  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Ensuring networks are active...
	I0420 01:11:45.868775  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Ensuring network default is active
	I0420 01:11:45.869148  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Ensuring network mk-kubernetes-upgrade-345460 is active
	I0420 01:11:45.869673  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Getting domain xml...
	I0420 01:11:45.870370  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Creating domain...
	I0420 01:11:47.191313  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) Waiting to get IP...
	I0420 01:11:47.192428  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.192906  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.192972  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:47.192880  126846 retry.go:31] will retry after 278.355914ms: waiting for machine to come up
	I0420 01:11:47.472383  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.472947  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.472977  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:47.472892  126846 retry.go:31] will retry after 307.613676ms: waiting for machine to come up
	I0420 01:11:47.782455  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.783039  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:47.783071  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:47.782980  126846 retry.go:31] will retry after 409.24697ms: waiting for machine to come up
	I0420 01:11:48.193553  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:48.194125  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:48.194151  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:48.194019  126846 retry.go:31] will retry after 606.575733ms: waiting for machine to come up
	I0420 01:11:48.802269  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:48.803033  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:48.803069  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:48.802883  126846 retry.go:31] will retry after 619.6654ms: waiting for machine to come up
	I0420 01:11:49.424362  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:49.424948  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:49.425004  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:49.424934  126846 retry.go:31] will retry after 807.006627ms: waiting for machine to come up
	I0420 01:11:50.233908  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | domain kubernetes-upgrade-345460 has defined MAC address 52:54:00:d3:00:79 in network mk-kubernetes-upgrade-345460
	I0420 01:11:50.234600  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | unable to find current IP address of domain kubernetes-upgrade-345460 in network mk-kubernetes-upgrade-345460
	I0420 01:11:50.234628  126811 main.go:141] libmachine: (kubernetes-upgrade-345460) DBG | I0420 01:11:50.234519  126846 retry.go:31] will retry after 867.385171ms: waiting for machine to come up
	I0420 01:11:48.575269  126402 addons.go:505] duration metric: took 6.321605ms for enable addons: enabled=[]
	I0420 01:11:48.792367  126402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:11:48.813776  126402 node_ready.go:35] waiting up to 6m0s for node "pause-680144" to be "Ready" ...
	I0420 01:11:48.816851  126402 node_ready.go:49] node "pause-680144" has status "Ready":"True"
	I0420 01:11:48.816873  126402 node_ready.go:38] duration metric: took 3.065804ms for node "pause-680144" to be "Ready" ...
	I0420 01:11:48.816885  126402 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:11:48.823145  126402 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n2xqv" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:49.217842  126402 pod_ready.go:92] pod "coredns-7db6d8ff4d-n2xqv" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:49.217870  126402 pod_ready.go:81] duration metric: took 394.700706ms for pod "coredns-7db6d8ff4d-n2xqv" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:49.217883  126402 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:49.619666  126402 pod_ready.go:92] pod "etcd-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:49.619693  126402 pod_ready.go:81] duration metric: took 401.801603ms for pod "etcd-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:49.619706  126402 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.019286  126402 pod_ready.go:92] pod "kube-apiserver-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:50.019313  126402 pod_ready.go:81] duration metric: took 399.599415ms for pod "kube-apiserver-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.019323  126402 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.419722  126402 pod_ready.go:92] pod "kube-controller-manager-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:50.419745  126402 pod_ready.go:81] duration metric: took 400.415528ms for pod "kube-controller-manager-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.419755  126402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jndg6" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.818025  126402 pod_ready.go:92] pod "kube-proxy-jndg6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:50.818049  126402 pod_ready.go:81] duration metric: took 398.287987ms for pod "kube-proxy-jndg6" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:50.818058  126402 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:51.218062  126402 pod_ready.go:92] pod "kube-scheduler-pause-680144" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:51.218088  126402 pod_ready.go:81] duration metric: took 400.021355ms for pod "kube-scheduler-pause-680144" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:51.218097  126402 pod_ready.go:38] duration metric: took 2.401199295s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:11:51.218113  126402 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:11:51.218160  126402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:11:51.235074  126402 api_server.go:72] duration metric: took 2.666165937s to wait for apiserver process to appear ...
	I0420 01:11:51.235098  126402 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:11:51.235120  126402 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I0420 01:11:51.240233  126402 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I0420 01:11:51.241328  126402 api_server.go:141] control plane version: v1.30.0
	I0420 01:11:51.241355  126402 api_server.go:131] duration metric: took 6.249294ms to wait for apiserver health ...
	I0420 01:11:51.241365  126402 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:11:51.421764  126402 system_pods.go:59] 6 kube-system pods found
	I0420 01:11:51.421792  126402 system_pods.go:61] "coredns-7db6d8ff4d-n2xqv" [35b4f9fc-975d-4d43-ade9-2818e1771e07] Running
	I0420 01:11:51.421802  126402 system_pods.go:61] "etcd-pause-680144" [efd2b2a8-8eda-418a-99d1-19d4090fbdca] Running
	I0420 01:11:51.421808  126402 system_pods.go:61] "kube-apiserver-pause-680144" [dacc3996-946e-4c5b-b960-efc8148d4a2d] Running
	I0420 01:11:51.421813  126402 system_pods.go:61] "kube-controller-manager-pause-680144" [90c81c0e-75d3-441d-b6b4-19a28ad116cc] Running
	I0420 01:11:51.421823  126402 system_pods.go:61] "kube-proxy-jndg6" [45a3f948-63b9-45b8-961e-b1b573c2b862] Running
	I0420 01:11:51.421827  126402 system_pods.go:61] "kube-scheduler-pause-680144" [e4284add-d032-4f00-a04e-059fcc279f4e] Running
	I0420 01:11:51.421836  126402 system_pods.go:74] duration metric: took 180.463976ms to wait for pod list to return data ...
	I0420 01:11:51.421850  126402 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:11:51.619486  126402 default_sa.go:45] found service account: "default"
	I0420 01:11:51.619516  126402 default_sa.go:55] duration metric: took 197.654132ms for default service account to be created ...
	I0420 01:11:51.619527  126402 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:11:51.823334  126402 system_pods.go:86] 6 kube-system pods found
	I0420 01:11:51.823374  126402 system_pods.go:89] "coredns-7db6d8ff4d-n2xqv" [35b4f9fc-975d-4d43-ade9-2818e1771e07] Running
	I0420 01:11:51.823384  126402 system_pods.go:89] "etcd-pause-680144" [efd2b2a8-8eda-418a-99d1-19d4090fbdca] Running
	I0420 01:11:51.823401  126402 system_pods.go:89] "kube-apiserver-pause-680144" [dacc3996-946e-4c5b-b960-efc8148d4a2d] Running
	I0420 01:11:51.823410  126402 system_pods.go:89] "kube-controller-manager-pause-680144" [90c81c0e-75d3-441d-b6b4-19a28ad116cc] Running
	I0420 01:11:51.823425  126402 system_pods.go:89] "kube-proxy-jndg6" [45a3f948-63b9-45b8-961e-b1b573c2b862] Running
	I0420 01:11:51.823439  126402 system_pods.go:89] "kube-scheduler-pause-680144" [e4284add-d032-4f00-a04e-059fcc279f4e] Running
	I0420 01:11:51.823450  126402 system_pods.go:126] duration metric: took 203.914771ms to wait for k8s-apps to be running ...
	I0420 01:11:51.823470  126402 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:11:51.823536  126402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:11:51.849267  126402 system_svc.go:56] duration metric: took 25.790354ms WaitForService to wait for kubelet
	I0420 01:11:51.849293  126402 kubeadm.go:576] duration metric: took 3.280387005s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:11:51.849324  126402 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:11:52.018536  126402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:11:52.018568  126402 node_conditions.go:123] node cpu capacity is 2
	I0420 01:11:52.018583  126402 node_conditions.go:105] duration metric: took 169.242351ms to run NodePressure ...
	I0420 01:11:52.018602  126402 start.go:240] waiting for startup goroutines ...
	I0420 01:11:52.018620  126402 start.go:245] waiting for cluster config update ...
	I0420 01:11:52.018634  126402 start.go:254] writing updated cluster config ...
	I0420 01:11:52.018939  126402 ssh_runner.go:195] Run: rm -f paused
	I0420 01:11:52.078955  126402 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:11:52.081819  126402 out.go:177] * Done! kubectl is now configured to use "pause-680144" cluster and "default" namespace by default
	I0420 01:11:49.331140  125761 pod_ready.go:102] pod "coredns-7db6d8ff4d-7jz9v" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:51.831111  125761 pod_ready.go:102] pod "coredns-7db6d8ff4d-7jz9v" in "kube-system" namespace has status "Ready":"False"
	I0420 01:11:52.329788  125761 pod_ready.go:92] pod "coredns-7db6d8ff4d-7jz9v" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:52.329819  125761 pod_ready.go:81] duration metric: took 39.508077461s for pod "coredns-7db6d8ff4d-7jz9v" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.329833  125761 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-glck4" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.336374  125761 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-glck4" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-glck4" not found
	I0420 01:11:52.336403  125761 pod_ready.go:81] duration metric: took 6.561191ms for pod "coredns-7db6d8ff4d-glck4" in "kube-system" namespace to be "Ready" ...
	E0420 01:11:52.336417  125761 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-glck4" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-glck4" not found
	I0420 01:11:52.336425  125761 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.344591  125761 pod_ready.go:92] pod "etcd-bridge-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:52.344610  125761 pod_ready.go:81] duration metric: took 8.177051ms for pod "etcd-bridge-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.344622  125761 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.352533  125761 pod_ready.go:92] pod "kube-apiserver-bridge-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:52.352555  125761 pod_ready.go:81] duration metric: took 7.924998ms for pod "kube-apiserver-bridge-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.352568  125761 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.364359  125761 pod_ready.go:92] pod "kube-controller-manager-bridge-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:52.364376  125761 pod_ready.go:81] duration metric: took 11.800531ms for pod "kube-controller-manager-bridge-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.364384  125761 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-2qscs" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.526815  125761 pod_ready.go:92] pod "kube-proxy-2qscs" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:52.526841  125761 pod_ready.go:81] duration metric: took 162.449982ms for pod "kube-proxy-2qscs" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.526853  125761 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.926523  125761 pod_ready.go:92] pod "kube-scheduler-bridge-831611" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:52.926542  125761 pod_ready.go:81] duration metric: took 399.681834ms for pod "kube-scheduler-bridge-831611" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:52.926550  125761 pod_ready.go:38] duration metric: took 40.131341937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:11:52.926564  125761 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:11:52.926607  125761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:11:52.946518  125761 api_server.go:72] duration metric: took 40.985720552s to wait for apiserver process to appear ...
	I0420 01:11:52.946539  125761 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:11:52.946559  125761 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8443/healthz ...
	I0420 01:11:52.951186  125761 api_server.go:279] https://192.168.61.206:8443/healthz returned 200:
	ok
	I0420 01:11:52.952528  125761 api_server.go:141] control plane version: v1.30.0
	I0420 01:11:52.952551  125761 api_server.go:131] duration metric: took 6.00383ms to wait for apiserver health ...
	I0420 01:11:52.952560  125761 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:11:53.132505  125761 system_pods.go:59] 7 kube-system pods found
	I0420 01:11:53.132555  125761 system_pods.go:61] "coredns-7db6d8ff4d-7jz9v" [d1edcb4c-c51e-4bf4-828a-09d1ad477feb] Running
	I0420 01:11:53.132565  125761 system_pods.go:61] "etcd-bridge-831611" [2b89a866-412b-416f-ab32-a909562ea7f5] Running
	I0420 01:11:53.132571  125761 system_pods.go:61] "kube-apiserver-bridge-831611" [302a96b3-80d5-4380-a8ec-dc31470e954d] Running
	I0420 01:11:53.132577  125761 system_pods.go:61] "kube-controller-manager-bridge-831611" [a1ed8f1c-35ee-419a-8858-2c12365f5a50] Running
	I0420 01:11:53.132583  125761 system_pods.go:61] "kube-proxy-2qscs" [5b1f9772-7c20-422e-b9a5-bda0d7f03d35] Running
	I0420 01:11:53.132589  125761 system_pods.go:61] "kube-scheduler-bridge-831611" [4daeefcb-98fb-4913-ae0f-0cfd0334feb6] Running
	I0420 01:11:53.132594  125761 system_pods.go:61] "storage-provisioner" [d2b0f7d3-ea3b-46e4-b3e5-2055d22c152a] Running
	I0420 01:11:53.132605  125761 system_pods.go:74] duration metric: took 180.038045ms to wait for pod list to return data ...
	I0420 01:11:53.132619  125761 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:11:53.327382  125761 default_sa.go:45] found service account: "default"
	I0420 01:11:53.327408  125761 default_sa.go:55] duration metric: took 194.780483ms for default service account to be created ...
	I0420 01:11:53.327417  125761 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:11:53.532681  125761 system_pods.go:86] 7 kube-system pods found
	I0420 01:11:53.532716  125761 system_pods.go:89] "coredns-7db6d8ff4d-7jz9v" [d1edcb4c-c51e-4bf4-828a-09d1ad477feb] Running
	I0420 01:11:53.532724  125761 system_pods.go:89] "etcd-bridge-831611" [2b89a866-412b-416f-ab32-a909562ea7f5] Running
	I0420 01:11:53.532731  125761 system_pods.go:89] "kube-apiserver-bridge-831611" [302a96b3-80d5-4380-a8ec-dc31470e954d] Running
	I0420 01:11:53.532737  125761 system_pods.go:89] "kube-controller-manager-bridge-831611" [a1ed8f1c-35ee-419a-8858-2c12365f5a50] Running
	I0420 01:11:53.532744  125761 system_pods.go:89] "kube-proxy-2qscs" [5b1f9772-7c20-422e-b9a5-bda0d7f03d35] Running
	I0420 01:11:53.532749  125761 system_pods.go:89] "kube-scheduler-bridge-831611" [4daeefcb-98fb-4913-ae0f-0cfd0334feb6] Running
	I0420 01:11:53.532755  125761 system_pods.go:89] "storage-provisioner" [d2b0f7d3-ea3b-46e4-b3e5-2055d22c152a] Running
	I0420 01:11:53.532764  125761 system_pods.go:126] duration metric: took 205.34071ms to wait for k8s-apps to be running ...
	I0420 01:11:53.532774  125761 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:11:53.532822  125761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:11:53.552857  125761 system_svc.go:56] duration metric: took 20.071754ms WaitForService to wait for kubelet
	I0420 01:11:53.552890  125761 kubeadm.go:576] duration metric: took 41.592095578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:11:53.552915  125761 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:11:53.727610  125761 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:11:53.727640  125761 node_conditions.go:123] node cpu capacity is 2
	I0420 01:11:53.727654  125761 node_conditions.go:105] duration metric: took 174.732704ms to run NodePressure ...
	I0420 01:11:53.727667  125761 start.go:240] waiting for startup goroutines ...
	I0420 01:11:53.727677  125761 start.go:245] waiting for cluster config update ...
	I0420 01:11:53.727689  125761 start.go:254] writing updated cluster config ...
	I0420 01:11:53.727955  125761 ssh_runner.go:195] Run: rm -f paused
	I0420 01:11:53.805906  125761 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:11:53.807918  125761 out.go:177] * Done! kubectl is now configured to use "bridge-831611" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.477140637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e933055a-d3c2-4e04-8049-ead64b389bfc name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.477450979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575491146851437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575491162448940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575491143740578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575491130451085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2,PodSandboxId:dc4a949cfd3a4b4f906ef02e1bac52af01e00a4001ded750ff0363ac906f1e6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575478299552441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6,PodSandboxId:e7b87a2b6b167fde0f91587d57d2431040a62f4856bc1a1349fedcd77278ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575477557417087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io
.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575477462029965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575477365903107,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575477314982781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575477270781742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176,PodSandboxId:da19d23d434d691293af78b242fe629130fd93d712a21dfb15e5f041331c1f1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575419634736441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de,PodSandboxId:b8947ae0e6d919c8660da9dda59e5d231f54b09b47c8c960bdfd4e1ac2124e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575419723517300,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e933055a-d3c2-4e04-8049-ead64b389bfc name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.539684223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f76bdc08-1ad0-4bee-8d90-c353ab28023b name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.539825152Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f76bdc08-1ad0-4bee-8d90-c353ab28023b name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.543613326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c871e65-b036-4cb2-aaec-18ae2528f66c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.544519226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575515544417310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c871e65-b036-4cb2-aaec-18ae2528f66c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.545193215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0bbdf61-09c6-4921-b705-a2d320d068b0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.545376406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0bbdf61-09c6-4921-b705-a2d320d068b0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.545722476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575491146851437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575491162448940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575491143740578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575491130451085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2,PodSandboxId:dc4a949cfd3a4b4f906ef02e1bac52af01e00a4001ded750ff0363ac906f1e6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575478299552441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6,PodSandboxId:e7b87a2b6b167fde0f91587d57d2431040a62f4856bc1a1349fedcd77278ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575477557417087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io
.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575477462029965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575477365903107,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575477314982781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575477270781742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176,PodSandboxId:da19d23d434d691293af78b242fe629130fd93d712a21dfb15e5f041331c1f1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575419634736441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de,PodSandboxId:b8947ae0e6d919c8660da9dda59e5d231f54b09b47c8c960bdfd4e1ac2124e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575419723517300,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0bbdf61-09c6-4921-b705-a2d320d068b0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.627957216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d73b1e56-bdbf-446f-9547-930fd7537248 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.628033379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d73b1e56-bdbf-446f-9547-930fd7537248 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.631126692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99394a50-4e16-4e29-8039-a26a0ffacf99 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.631705940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575515631673471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99394a50-4e16-4e29-8039-a26a0ffacf99 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.633391885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=108b318c-9c30-46e7-b08c-441ea0172a90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.633517832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=108b318c-9c30-46e7-b08c-441ea0172a90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.638476217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575491146851437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575491162448940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575491143740578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575491130451085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2,PodSandboxId:dc4a949cfd3a4b4f906ef02e1bac52af01e00a4001ded750ff0363ac906f1e6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575478299552441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6,PodSandboxId:e7b87a2b6b167fde0f91587d57d2431040a62f4856bc1a1349fedcd77278ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575477557417087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io
.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575477462029965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575477365903107,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575477314982781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575477270781742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176,PodSandboxId:da19d23d434d691293af78b242fe629130fd93d712a21dfb15e5f041331c1f1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575419634736441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de,PodSandboxId:b8947ae0e6d919c8660da9dda59e5d231f54b09b47c8c960bdfd4e1ac2124e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575419723517300,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=108b318c-9c30-46e7-b08c-441ea0172a90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.707042216Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=e0543b90-370a-4438-84fa-f954dff0b497 name=/runtime.v1.RuntimeService/Status
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.707194225Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e0543b90-370a-4438-84fa-f954dff0b497 name=/runtime.v1.RuntimeService/Status
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.715142065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76bf9ddb-0e8b-424c-ad88-058d0f4d8e72 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.715386103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76bf9ddb-0e8b-424c-ad88-058d0f4d8e72 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.716805634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=761eafd2-ef20-4061-8493-22bf7a25386d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.717577992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713575515717516438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=761eafd2-ef20-4061-8493-22bf7a25386d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.718984354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fcd55b5-c059-4e04-bafd-aa1200629cd3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.719095848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fcd55b5-c059-4e04-bafd-aa1200629cd3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:11:55 pause-680144 crio[2441]: time="2024-04-20 01:11:55.719587849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713575491146851437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713575491162448940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713575491143740578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713575491130451085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2,PodSandboxId:dc4a949cfd3a4b4f906ef02e1bac52af01e00a4001ded750ff0363ac906f1e6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713575478299552441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6,PodSandboxId:e7b87a2b6b167fde0f91587d57d2431040a62f4856bc1a1349fedcd77278ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713575477557417087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io
.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980,PodSandboxId:70a600126c5a247b13bd7e172de1a47d0677fa1eee824a0f6eddc5e9ac1e8ef2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713575477462029965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba0ab0576b25686ea1d2dcabab1c014,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b,PodSandboxId:e8cb5da210123818ee46fa4cdf87fbfd8c59c35cf82c76e73d4f7fc3ac07ae40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713575477365903107,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 610fc688abee6d1434e7d2e556fad82d,},Annotations:map[string]string{io.kubernetes.container.hash: fb3a49a8,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af,PodSandboxId:e47f531de3f237207159f867ef3534f029825c2c8d651957711647d4e13fda3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713575477314982781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 794c4a62a0b10913407c7946e3fa7672,},Annotations:map[string]string{io.kubernetes.container.hash: 603c03d9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c,PodSandboxId:46a3b94b2804ca83ada0a679eaf55e4c5e36dd3f8fa39d3dac4544cccd5fd5fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713575477270781742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-680144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7753d065e8b91b151adc80443b939d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176,PodSandboxId:da19d23d434d691293af78b242fe629130fd93d712a21dfb15e5f041331c1f1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713575419634736441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jndg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a3f948-63b9-45b8-961e-b1b573c2b862,},Annotations:map[string]string{io.kubernetes.container.hash: e3004a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de,PodSandboxId:b8947ae0e6d919c8660da9dda59e5d231f54b09b47c8c960bdfd4e1ac2124e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713575419723517300,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2xqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b4f9fc-975d-4d43-ade9-2818e1771e07,},Annotations:map[string]string{io.kubernetes.container.hash: 3a494162,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fcd55b5-c059-4e04-bafd-aa1200629cd3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	58f6fedf5a80c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   24 seconds ago       Running             kube-scheduler            2                   70a600126c5a2       kube-scheduler-pause-680144
	a7079c73478b7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago       Running             etcd                      2                   e8cb5da210123       etcd-pause-680144
	3c171416879a3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   24 seconds ago       Running             kube-controller-manager   2                   46a3b94b2804c       kube-controller-manager-pause-680144
	57b091fcd81d2       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   24 seconds ago       Running             kube-apiserver            2                   e47f531de3f23       kube-apiserver-pause-680144
	229e3f712b817       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   37 seconds ago       Running             coredns                   1                   dc4a949cfd3a4       coredns-7db6d8ff4d-n2xqv
	c78e339dfe367       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   38 seconds ago       Running             kube-proxy                1                   e7b87a2b6b167       kube-proxy-jndg6
	2ce651de3c308       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   38 seconds ago       Exited              kube-scheduler            1                   70a600126c5a2       kube-scheduler-pause-680144
	2d5a96f57a736       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   38 seconds ago       Exited              etcd                      1                   e8cb5da210123       etcd-pause-680144
	b10d8cec2caed       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   38 seconds ago       Exited              kube-apiserver            1                   e47f531de3f23       kube-apiserver-pause-680144
	12f76e07e9fd0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   38 seconds ago       Exited              kube-controller-manager   1                   46a3b94b2804c       kube-controller-manager-pause-680144
	1e88e57cc3c7b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   b8947ae0e6d91       coredns-7db6d8ff4d-n2xqv
	59845b36c60dc       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   About a minute ago   Exited              kube-proxy                0                   da19d23d434d6       kube-proxy-jndg6
	
	
	==> coredns [1e88e57cc3c7b1a143078c19ab3ff4bb0fc4c078aeb8551cba7d13e089e4e2de] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1232768171]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:10:20.216) (total time: 30003ms):
	Trace[1232768171]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:10:50.219)
	Trace[1232768171]: [30.003127668s] [30.003127668s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[51788946]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:10:20.216) (total time: 30003ms):
	Trace[51788946]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (01:10:50.220)
	Trace[51788946]: [30.003655733s] [30.003655733s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[621536973]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:10:20.219) (total time: 30002ms):
	Trace[621536973]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:10:50.221)
	Trace[621536973]: [30.002519428s] [30.002519428s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33592 - 49771 "HINFO IN 2256637506804474767.8792056235808915989. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009355939s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [229e3f712b81762e37afbc46833638c0675d1021f616c0247bebce064605fdf2] <==
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43317 - 10702 "HINFO IN 6865145529338662859.4907811133070578253. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011610026s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2065321125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:11:18.882) (total time: 10001ms):
	Trace[2065321125]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (01:11:28.883)
	Trace[2065321125]: [10.00111154s] [10.00111154s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[54349611]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:11:18.882) (total time: 10001ms):
	Trace[54349611]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:11:28.884)
	Trace[54349611]: [10.001621347s] [10.001621347s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1017594132]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-Apr-2024 01:11:18.881) (total time: 10002ms):
	Trace[1017594132]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (01:11:28.884)
	Trace[1017594132]: [10.002512237s] [10.002512237s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-680144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-680144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=pause-680144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T01_10_06_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:10:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-680144
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:11:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:11:34 +0000   Sat, 20 Apr 2024 01:10:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:11:34 +0000   Sat, 20 Apr 2024 01:10:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:11:34 +0000   Sat, 20 Apr 2024 01:10:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:11:34 +0000   Sat, 20 Apr 2024 01:10:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.180
	  Hostname:    pause-680144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f85af7a15d24afaa34d3485bf416ee0
	  System UUID:                9f85af7a-15d2-4afa-a34d-3485bf416ee0
	  Boot ID:                    67e5bc76-3f7d-4666-885d-621a6b4231c3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-n2xqv                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     98s
	  kube-system                 etcd-pause-680144                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         111s
	  kube-system                 kube-apiserver-pause-680144             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-pause-680144    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-jndg6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-pause-680144             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 95s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     111s               kubelet          Node pause-680144 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node pause-680144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node pause-680144 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  NodeReady                110s               kubelet          Node pause-680144 status is now: NodeReady
	  Normal  RegisteredNode           99s                node-controller  Node pause-680144 event: Registered Node pause-680144 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-680144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-680144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-680144 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-680144 event: Registered Node pause-680144 in Controller
	
	
	==> dmesg <==
	[  +0.062956] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073117] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.159699] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.164963] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.329775] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +5.065680] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.066439] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.231807] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.771077] kauditd_printk_skb: 54 callbacks suppressed
	[Apr20 01:10] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.088040] kauditd_printk_skb: 33 callbacks suppressed
	[ +12.915515] systemd-fstab-generator[1491]: Ignoring "noauto" option for root device
	[  +0.134208] kauditd_printk_skb: 21 callbacks suppressed
	[ +40.613342] kauditd_printk_skb: 96 callbacks suppressed
	[Apr20 01:11] systemd-fstab-generator[2359]: Ignoring "noauto" option for root device
	[  +0.147752] systemd-fstab-generator[2371]: Ignoring "noauto" option for root device
	[  +0.188827] systemd-fstab-generator[2385]: Ignoring "noauto" option for root device
	[  +0.153505] systemd-fstab-generator[2397]: Ignoring "noauto" option for root device
	[  +0.329143] systemd-fstab-generator[2425]: Ignoring "noauto" option for root device
	[  +7.517481] systemd-fstab-generator[2552]: Ignoring "noauto" option for root device
	[  +0.086001] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.485673] kauditd_printk_skb: 87 callbacks suppressed
	[  +1.257561] systemd-fstab-generator[3277]: Ignoring "noauto" option for root device
	[  +4.378918] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.935605] systemd-fstab-generator[3629]: Ignoring "noauto" option for root device
	
	
	==> etcd [2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b] <==
	{"level":"info","ts":"2024-04-20T01:11:18.19214Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"46.600899ms"}
	{"level":"info","ts":"2024-04-20T01:11:18.289505Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-20T01:11:18.303404Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","commit-index":422}
	{"level":"info","ts":"2024-04-20T01:11:18.303541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-20T01:11:18.303596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became follower at term 2"}
	{"level":"info","ts":"2024-04-20T01:11:18.30361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a1d4aad7c74b318 [peers: [], term: 2, commit: 422, applied: 0, lastindex: 422, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-20T01:11:18.328555Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-20T01:11:18.371357Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":404}
	{"level":"info","ts":"2024-04-20T01:11:18.385729Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-20T01:11:18.396934Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a1d4aad7c74b318","timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:11:18.397166Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a1d4aad7c74b318"}
	{"level":"info","ts":"2024-04-20T01:11:18.397264Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"a1d4aad7c74b318","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-20T01:11:18.397504Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-20T01:11:18.397617Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:11:18.397645Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:11:18.397659Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:11:18.433853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 switched to configuration voters=(728820823681708824)"}
	{"level":"info","ts":"2024-04-20T01:11:18.43392Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","added-peer-id":"a1d4aad7c74b318","added-peer-peer-urls":["https://192.168.72.180:2380"]}
	{"level":"info","ts":"2024-04-20T01:11:18.434025Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:11:18.434052Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:11:18.480995Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T01:11:18.482608Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2024-04-20T01:11:18.487703Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2024-04-20T01:11:18.487929Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a1d4aad7c74b318","initial-advertise-peer-urls":["https://192.168.72.180:2380"],"listen-peer-urls":["https://192.168.72.180:2380"],"advertise-client-urls":["https://192.168.72.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:11:18.487986Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [a7079c73478b7fdcc2ac6783b2f1823f580d6c983713f09e3474517f167559b1] <==
	{"level":"info","ts":"2024-04-20T01:11:31.581905Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","added-peer-id":"a1d4aad7c74b318","added-peer-peer-urls":["https://192.168.72.180:2380"]}
	{"level":"info","ts":"2024-04-20T01:11:31.582019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:11:31.582066Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:11:31.586592Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T01:11:31.586712Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2024-04-20T01:11:31.58689Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2024-04-20T01:11:31.588626Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T01:11:31.588555Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a1d4aad7c74b318","initial-advertise-peer-urls":["https://192.168.72.180:2380"],"listen-peer-urls":["https://192.168.72.180:2380"],"advertise-client-urls":["https://192.168.72.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:11:32.767729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-20T01:11:32.767815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-20T01:11:32.767847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgPreVoteResp from a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2024-04-20T01:11:32.767874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became candidate at term 3"}
	{"level":"info","ts":"2024-04-20T01:11:32.76788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgVoteResp from a1d4aad7c74b318 at term 3"}
	{"level":"info","ts":"2024-04-20T01:11:32.767888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became leader at term 3"}
	{"level":"info","ts":"2024-04-20T01:11:32.767897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a1d4aad7c74b318 elected leader a1d4aad7c74b318 at term 3"}
	{"level":"info","ts":"2024-04-20T01:11:32.775105Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a1d4aad7c74b318","local-member-attributes":"{Name:pause-680144 ClientURLs:[https://192.168.72.180:2379]}","request-path":"/0/members/a1d4aad7c74b318/attributes","cluster-id":"1bb44bc72743d07d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:11:32.775163Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:11:32.775709Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:11:32.775818Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:11:32.775881Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:11:32.777532Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T01:11:32.779532Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.180:2379"}
	{"level":"warn","ts":"2024-04-20T01:11:35.085072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.813872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12905221932579286935 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-jndg6\" mod_revision:373 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-jndg6\" value_size:4638 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-jndg6\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-20T01:11:35.085185Z","caller":"traceutil/trace.go:171","msg":"trace[1398383649] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"311.494929ms","start":"2024-04-20T01:11:34.773673Z","end":"2024-04-20T01:11:35.085168Z","steps":["trace[1398383649] 'process raft request'  (duration: 156.98102ms)","trace[1398383649] 'compare'  (duration: 153.702608ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T01:11:35.085308Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:34.773659Z","time spent":"311.619097ms","remote":"127.0.0.1:43562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4689,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-jndg6\" mod_revision:373 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-jndg6\" value_size:4638 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-jndg6\" > >"}
	
	
	==> kernel <==
	 01:11:56 up 2 min,  0 users,  load average: 0.60, 0.31, 0.12
	Linux pause-680144 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57b091fcd81d20aa572548f07f59b0e8e0a54d0e3fa4b0f484b9324efb57918d] <==
	I0420 01:11:34.246724       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0420 01:11:34.355872       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 01:11:34.356157       1 policy_source.go:224] refreshing policies
	I0420 01:11:34.370180       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 01:11:34.377618       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 01:11:34.387749       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 01:11:34.388865       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 01:11:34.388923       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 01:11:34.388970       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 01:11:34.388976       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 01:11:34.400858       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 01:11:34.401412       1 aggregator.go:165] initial CRD sync complete...
	I0420 01:11:34.401597       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 01:11:34.401640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 01:11:34.401669       1 cache.go:39] Caches are synced for autoregister controller
	I0420 01:11:34.423594       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0420 01:11:34.449353       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 01:11:35.183282       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0420 01:11:35.827738       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0420 01:11:35.843012       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 01:11:35.879810       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 01:11:35.921544       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0420 01:11:35.932998       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0420 01:11:46.703003       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0420 01:11:46.980128       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af] <==
	I0420 01:11:17.922949       1 options.go:221] external host was not specified, using 192.168.72.180
	I0420 01:11:17.924304       1 server.go:148] Version: v1.30.0
	I0420 01:11:17.924360       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:18.822418       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0420 01:11:18.824915       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0420 01:11:18.825442       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0420 01:11:18.825608       1 instance.go:299] Using reconciler: lease
	I0420 01:11:18.824940       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0420 01:11:18.827450       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:18.827540       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:18.827626       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:19.828820       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:19.828921       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:19.829106       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:21.127387       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:21.372633       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:21.715962       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:24.042038       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:24.060482       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:24.158542       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:27.790934       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:28.144956       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:11:28.672076       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c] <==
	I0420 01:11:18.931911       1 serving.go:380] Generated self-signed cert in-memory
	I0420 01:11:19.179036       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0420 01:11:19.179156       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:19.181052       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0420 01:11:19.181339       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0420 01:11:19.181416       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0420 01:11:19.182039       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [3c171416879a3f9cd8e9cdba12d01d577e522a14cde4e31e0d4b64e7b8d4a553] <==
	I0420 01:11:46.709472       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0420 01:11:46.709549       1 shared_informer.go:320] Caches are synced for disruption
	I0420 01:11:46.709594       1 shared_informer.go:320] Caches are synced for ephemeral
	I0420 01:11:46.711030       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0420 01:11:46.715331       1 shared_informer.go:320] Caches are synced for deployment
	I0420 01:11:46.718522       1 shared_informer.go:320] Caches are synced for node
	I0420 01:11:46.718602       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0420 01:11:46.718650       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0420 01:11:46.718679       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0420 01:11:46.718701       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0420 01:11:46.720335       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0420 01:11:46.723136       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0420 01:11:46.729449       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0420 01:11:46.755786       1 shared_informer.go:320] Caches are synced for stateful set
	I0420 01:11:46.767291       1 shared_informer.go:320] Caches are synced for HPA
	I0420 01:11:46.785318       1 shared_informer.go:320] Caches are synced for persistent volume
	I0420 01:11:46.789425       1 shared_informer.go:320] Caches are synced for PV protection
	I0420 01:11:46.815873       1 shared_informer.go:320] Caches are synced for attach detach
	I0420 01:11:46.914097       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0420 01:11:46.933506       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 01:11:46.942301       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 01:11:46.969324       1 shared_informer.go:320] Caches are synced for endpoint
	I0420 01:11:47.350434       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 01:11:47.350563       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0420 01:11:47.388460       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [59845b36c60dcd90fbf451ae7f951c66655e974a6754e09549e907c0f0209176] <==
	I0420 01:10:20.213351       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:10:20.226755       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.180"]
	I0420 01:10:20.284850       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:10:20.284922       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:10:20.284941       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:10:20.288007       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:10:20.288189       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:10:20.288285       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:10:20.289313       1 config.go:192] "Starting service config controller"
	I0420 01:10:20.289458       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:10:20.289486       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:10:20.289489       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:10:20.290047       1 config.go:319] "Starting node config controller"
	I0420 01:10:20.290086       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:10:20.390571       1 shared_informer.go:320] Caches are synced for node config
	I0420 01:10:20.390658       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:10:20.390620       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [c78e339dfe36773be10f8568132cffb783135d3f41e884a520c3c234fe4fc8e6] <==
	I0420 01:11:18.919831       1 server_linux.go:69] "Using iptables proxy"
	E0420 01:11:29.899191       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-680144\": dial tcp 192.168.72.180:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.180:52516->192.168.72.180:8443: read: connection reset by peer"
	E0420 01:11:30.917134       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-680144\": dial tcp 192.168.72.180:8443: connect: connection refused"
	I0420 01:11:34.427176       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.180"]
	I0420 01:11:34.497768       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:11:34.497852       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:11:34.497873       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:11:34.501144       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:11:34.501675       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:11:34.501724       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:34.503912       1 config.go:192] "Starting service config controller"
	I0420 01:11:34.503955       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:11:34.503981       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:11:34.503985       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:11:34.505903       1 config.go:319] "Starting node config controller"
	I0420 01:11:34.508834       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:11:34.604310       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:11:34.604460       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:11:34.613360       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980] <==
	
	
	==> kube-scheduler [58f6fedf5a80c775d4a0d66196e9e813c9f09c14c904c68a15172607ecc890d0] <==
	I0420 01:11:32.280821       1 serving.go:380] Generated self-signed cert in-memory
	W0420 01:11:34.296736       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0420 01:11:34.296805       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:11:34.296817       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0420 01:11:34.296828       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0420 01:11:34.360176       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0420 01:11:34.364293       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:34.374124       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0420 01:11:34.374441       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0420 01:11:34.374548       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 01:11:34.374604       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0420 01:11:34.475808       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 01:11:30 pause-680144 kubelet[3284]: I0420 01:11:30.830040    3284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/794c4a62a0b10913407c7946e3fa7672-ca-certs\") pod \"kube-apiserver-pause-680144\" (UID: \"794c4a62a0b10913407c7946e3fa7672\") " pod="kube-system/kube-apiserver-pause-680144"
	Apr 20 01:11:30 pause-680144 kubelet[3284]: I0420 01:11:30.830056    3284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c7753d065e8b91b151adc80443b939d-ca-certs\") pod \"kube-controller-manager-pause-680144\" (UID: \"6c7753d065e8b91b151adc80443b939d\") " pod="kube-system/kube-controller-manager-pause-680144"
	Apr 20 01:11:30 pause-680144 kubelet[3284]: E0420 01:11:30.830363    3284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-680144?timeout=10s\": dial tcp 192.168.72.180:8443: connect: connection refused" interval="400ms"
	Apr 20 01:11:30 pause-680144 kubelet[3284]: I0420 01:11:30.925391    3284 kubelet_node_status.go:73] "Attempting to register node" node="pause-680144"
	Apr 20 01:11:30 pause-680144 kubelet[3284]: E0420 01:11:30.926462    3284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.180:8443: connect: connection refused" node="pause-680144"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.106890    3284 scope.go:117] "RemoveContainer" containerID="2d5a96f57a7369e2f12aeed9290333c2de48d98ad0fa87296ed60d6d23892d5b"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.109698    3284 scope.go:117] "RemoveContainer" containerID="b10d8cec2caed1534aac8ce051bbc0f03ecaab1265f0ac3e1d754dc8d96061af"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.110959    3284 scope.go:117] "RemoveContainer" containerID="12f76e07e9fd0a694df3fa853e27f3c0ebf6d407cfbb7593d46be1cbd277cb2c"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.111440    3284 scope.go:117] "RemoveContainer" containerID="2ce651de3c308355c6e3bc9b00a084e29c5ec8675ac179ab77f2039a1ce31980"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: E0420 01:11:31.232580    3284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-680144?timeout=10s\": dial tcp 192.168.72.180:8443: connect: connection refused" interval="800ms"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: I0420 01:11:31.327991    3284 kubelet_node_status.go:73] "Attempting to register node" node="pause-680144"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: E0420 01:11:31.329043    3284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.180:8443: connect: connection refused" node="pause-680144"
	Apr 20 01:11:31 pause-680144 kubelet[3284]: W0420 01:11:31.460187    3284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-680144&limit=500&resourceVersion=0": dial tcp 192.168.72.180:8443: connect: connection refused
	Apr 20 01:11:31 pause-680144 kubelet[3284]: E0420 01:11:31.460538    3284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-680144&limit=500&resourceVersion=0": dial tcp 192.168.72.180:8443: connect: connection refused
	Apr 20 01:11:32 pause-680144 kubelet[3284]: I0420 01:11:32.132103    3284 kubelet_node_status.go:73] "Attempting to register node" node="pause-680144"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.443628    3284 kubelet_node_status.go:112] "Node was previously registered" node="pause-680144"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.444107    3284 kubelet_node_status.go:76] "Successfully registered node" node="pause-680144"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.446567    3284 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.448099    3284 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.605383    3284 apiserver.go:52] "Watching apiserver"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.610619    3284 topology_manager.go:215] "Topology Admit Handler" podUID="35b4f9fc-975d-4d43-ade9-2818e1771e07" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n2xqv"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.612352    3284 topology_manager.go:215] "Topology Admit Handler" podUID="45a3f948-63b9-45b8-961e-b1b573c2b862" podNamespace="kube-system" podName="kube-proxy-jndg6"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.620878    3284 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.715499    3284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45a3f948-63b9-45b8-961e-b1b573c2b862-xtables-lock\") pod \"kube-proxy-jndg6\" (UID: \"45a3f948-63b9-45b8-961e-b1b573c2b862\") " pod="kube-system/kube-proxy-jndg6"
	Apr 20 01:11:34 pause-680144 kubelet[3284]: I0420 01:11:34.715619    3284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45a3f948-63b9-45b8-961e-b1b573c2b862-lib-modules\") pod \"kube-proxy-jndg6\" (UID: \"45a3f948-63b9-45b8-961e-b1b573c2b862\") " pod="kube-system/kube-proxy-jndg6"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-680144 -n pause-680144
helpers_test.go:261: (dbg) Run:  kubectl --context pause-680144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (56.61s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (327.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-564860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-564860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m27.395553728s)

                                                
                                                
-- stdout --
	* [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:14:27.698927  134009 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:14:27.699197  134009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:14:27.699206  134009 out.go:304] Setting ErrFile to fd 2...
	I0420 01:14:27.699211  134009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:14:27.699394  134009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:14:27.699985  134009 out.go:298] Setting JSON to false
	I0420 01:14:27.701049  134009 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14215,"bootTime":1713561453,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:14:27.701113  134009 start.go:139] virtualization: kvm guest
	I0420 01:14:27.703567  134009 out.go:177] * [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:14:27.705181  134009 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:14:27.706716  134009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:14:27.705141  134009 notify.go:220] Checking for updates...
	I0420 01:14:27.708213  134009 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:14:27.709597  134009 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:14:27.710955  134009 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:14:27.712320  134009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:14:27.714054  134009 config.go:182] Loaded profile config "calico-831611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:14:27.714153  134009 config.go:182] Loaded profile config "custom-flannel-831611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:14:27.714225  134009 config.go:182] Loaded profile config "flannel-831611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:14:27.714310  134009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:14:27.752898  134009 out.go:177] * Using the kvm2 driver based on user configuration
	I0420 01:14:27.754160  134009 start.go:297] selected driver: kvm2
	I0420 01:14:27.754180  134009 start.go:901] validating driver "kvm2" against <nil>
	I0420 01:14:27.754204  134009 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:14:27.755234  134009 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:14:27.755349  134009 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:14:27.772308  134009 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:14:27.772352  134009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0420 01:14:27.772542  134009 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:14:27.772598  134009 cni.go:84] Creating CNI manager for ""
	I0420 01:14:27.772609  134009 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:14:27.772621  134009 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0420 01:14:27.772673  134009 start.go:340] cluster config:
	{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:14:27.772796  134009 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:14:27.774516  134009 out.go:177] * Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	I0420 01:14:27.775713  134009 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:14:27.775752  134009 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:14:27.775766  134009 cache.go:56] Caching tarball of preloaded images
	I0420 01:14:27.775848  134009 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:14:27.775861  134009 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 01:14:27.775979  134009 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:14:27.776007  134009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json: {Name:mk40da66360b6817556d69b4da034d7fea18ad57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:14:27.776152  134009 start.go:360] acquireMachinesLock for old-k8s-version-564860: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:15:13.355820  134009 start.go:364] duration metric: took 45.579619767s to acquireMachinesLock for "old-k8s-version-564860"
	I0420 01:15:13.355883  134009 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:15:13.356088  134009 start.go:125] createHost starting for "" (driver="kvm2")
	I0420 01:15:13.359120  134009 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0420 01:15:13.359379  134009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:15:13.359430  134009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:15:13.378782  134009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
	I0420 01:15:13.379249  134009 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:15:13.379807  134009 main.go:141] libmachine: Using API Version  1
	I0420 01:15:13.379846  134009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:15:13.380199  134009 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:15:13.380386  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:15:13.380538  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:15:13.380691  134009 start.go:159] libmachine.API.Create for "old-k8s-version-564860" (driver="kvm2")
	I0420 01:15:13.380721  134009 client.go:168] LocalClient.Create starting
	I0420 01:15:13.380758  134009 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 01:15:13.380793  134009 main.go:141] libmachine: Decoding PEM data...
	I0420 01:15:13.380821  134009 main.go:141] libmachine: Parsing certificate...
	I0420 01:15:13.380904  134009 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 01:15:13.380928  134009 main.go:141] libmachine: Decoding PEM data...
	I0420 01:15:13.380947  134009 main.go:141] libmachine: Parsing certificate...
	I0420 01:15:13.380977  134009 main.go:141] libmachine: Running pre-create checks...
	I0420 01:15:13.380991  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .PreCreateCheck
	I0420 01:15:13.381290  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:15:13.381704  134009 main.go:141] libmachine: Creating machine...
	I0420 01:15:13.381724  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .Create
	I0420 01:15:13.381902  134009 main.go:141] libmachine: (old-k8s-version-564860) Creating KVM machine...
	I0420 01:15:13.383187  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found existing default KVM network
	I0420 01:15:13.384491  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:13.384330  135812 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:52:c1} reservation:<nil>}
	I0420 01:15:13.385354  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:13.385238  135812 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1e:a2:f1} reservation:<nil>}
	I0420 01:15:13.386538  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:13.386441  135812 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ae620}
	I0420 01:15:13.386562  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | created network xml: 
	I0420 01:15:13.386574  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | <network>
	I0420 01:15:13.386588  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG |   <name>mk-old-k8s-version-564860</name>
	I0420 01:15:13.386599  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG |   <dns enable='no'/>
	I0420 01:15:13.386606  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG |   
	I0420 01:15:13.386619  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0420 01:15:13.386634  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG |     <dhcp>
	I0420 01:15:13.386648  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0420 01:15:13.386660  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG |     </dhcp>
	I0420 01:15:13.386687  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG |   </ip>
	I0420 01:15:13.386710  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG |   
	I0420 01:15:13.386723  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | </network>
	I0420 01:15:13.386735  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | 
	I0420 01:15:13.392371  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | trying to create private KVM network mk-old-k8s-version-564860 192.168.61.0/24...
	I0420 01:15:13.473890  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | private KVM network mk-old-k8s-version-564860 192.168.61.0/24 created
	I0420 01:15:13.473921  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:13.473847  135812 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:15:13.473934  134009 main.go:141] libmachine: (old-k8s-version-564860) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860 ...
	I0420 01:15:13.473971  134009 main.go:141] libmachine: (old-k8s-version-564860) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 01:15:13.473988  134009 main.go:141] libmachine: (old-k8s-version-564860) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 01:15:13.796165  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:13.796033  135812 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa...
	I0420 01:15:13.945445  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:13.945236  135812 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/old-k8s-version-564860.rawdisk...
	I0420 01:15:13.945486  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Writing magic tar header
	I0420 01:15:13.945507  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Writing SSH key tar header
	I0420 01:15:13.945521  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:13.945399  135812 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860 ...
	I0420 01:15:13.945544  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860
	I0420 01:15:13.945642  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 01:15:13.945671  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:15:13.945688  134009 main.go:141] libmachine: (old-k8s-version-564860) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860 (perms=drwx------)
	I0420 01:15:13.945724  134009 main.go:141] libmachine: (old-k8s-version-564860) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 01:15:13.945739  134009 main.go:141] libmachine: (old-k8s-version-564860) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 01:15:13.945754  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 01:15:13.945780  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 01:15:13.945799  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Checking permissions on dir: /home/jenkins
	I0420 01:15:13.945832  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Checking permissions on dir: /home
	I0420 01:15:13.945862  134009 main.go:141] libmachine: (old-k8s-version-564860) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 01:15:13.945876  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Skipping /home - not owner
	I0420 01:15:13.945897  134009 main.go:141] libmachine: (old-k8s-version-564860) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 01:15:13.945918  134009 main.go:141] libmachine: (old-k8s-version-564860) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 01:15:13.945934  134009 main.go:141] libmachine: (old-k8s-version-564860) Creating domain...
	I0420 01:15:13.947321  134009 main.go:141] libmachine: (old-k8s-version-564860) define libvirt domain using xml: 
	I0420 01:15:13.947344  134009 main.go:141] libmachine: (old-k8s-version-564860) <domain type='kvm'>
	I0420 01:15:13.947355  134009 main.go:141] libmachine: (old-k8s-version-564860)   <name>old-k8s-version-564860</name>
	I0420 01:15:13.947364  134009 main.go:141] libmachine: (old-k8s-version-564860)   <memory unit='MiB'>2200</memory>
	I0420 01:15:13.947373  134009 main.go:141] libmachine: (old-k8s-version-564860)   <vcpu>2</vcpu>
	I0420 01:15:13.947386  134009 main.go:141] libmachine: (old-k8s-version-564860)   <features>
	I0420 01:15:13.947395  134009 main.go:141] libmachine: (old-k8s-version-564860)     <acpi/>
	I0420 01:15:13.947416  134009 main.go:141] libmachine: (old-k8s-version-564860)     <apic/>
	I0420 01:15:13.947424  134009 main.go:141] libmachine: (old-k8s-version-564860)     <pae/>
	I0420 01:15:13.947444  134009 main.go:141] libmachine: (old-k8s-version-564860)     
	I0420 01:15:13.947456  134009 main.go:141] libmachine: (old-k8s-version-564860)   </features>
	I0420 01:15:13.947464  134009 main.go:141] libmachine: (old-k8s-version-564860)   <cpu mode='host-passthrough'>
	I0420 01:15:13.947478  134009 main.go:141] libmachine: (old-k8s-version-564860)   
	I0420 01:15:13.947485  134009 main.go:141] libmachine: (old-k8s-version-564860)   </cpu>
	I0420 01:15:13.947496  134009 main.go:141] libmachine: (old-k8s-version-564860)   <os>
	I0420 01:15:13.947508  134009 main.go:141] libmachine: (old-k8s-version-564860)     <type>hvm</type>
	I0420 01:15:13.947516  134009 main.go:141] libmachine: (old-k8s-version-564860)     <boot dev='cdrom'/>
	I0420 01:15:13.947527  134009 main.go:141] libmachine: (old-k8s-version-564860)     <boot dev='hd'/>
	I0420 01:15:13.947537  134009 main.go:141] libmachine: (old-k8s-version-564860)     <bootmenu enable='no'/>
	I0420 01:15:13.947547  134009 main.go:141] libmachine: (old-k8s-version-564860)   </os>
	I0420 01:15:13.947555  134009 main.go:141] libmachine: (old-k8s-version-564860)   <devices>
	I0420 01:15:13.947565  134009 main.go:141] libmachine: (old-k8s-version-564860)     <disk type='file' device='cdrom'>
	I0420 01:15:13.947579  134009 main.go:141] libmachine: (old-k8s-version-564860)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/boot2docker.iso'/>
	I0420 01:15:13.947590  134009 main.go:141] libmachine: (old-k8s-version-564860)       <target dev='hdc' bus='scsi'/>
	I0420 01:15:13.947602  134009 main.go:141] libmachine: (old-k8s-version-564860)       <readonly/>
	I0420 01:15:13.947610  134009 main.go:141] libmachine: (old-k8s-version-564860)     </disk>
	I0420 01:15:13.947622  134009 main.go:141] libmachine: (old-k8s-version-564860)     <disk type='file' device='disk'>
	I0420 01:15:13.947635  134009 main.go:141] libmachine: (old-k8s-version-564860)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 01:15:13.947655  134009 main.go:141] libmachine: (old-k8s-version-564860)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/old-k8s-version-564860.rawdisk'/>
	I0420 01:15:13.947668  134009 main.go:141] libmachine: (old-k8s-version-564860)       <target dev='hda' bus='virtio'/>
	I0420 01:15:13.947678  134009 main.go:141] libmachine: (old-k8s-version-564860)     </disk>
	I0420 01:15:13.947693  134009 main.go:141] libmachine: (old-k8s-version-564860)     <interface type='network'>
	I0420 01:15:13.947706  134009 main.go:141] libmachine: (old-k8s-version-564860)       <source network='mk-old-k8s-version-564860'/>
	I0420 01:15:13.947717  134009 main.go:141] libmachine: (old-k8s-version-564860)       <model type='virtio'/>
	I0420 01:15:13.947729  134009 main.go:141] libmachine: (old-k8s-version-564860)     </interface>
	I0420 01:15:13.947741  134009 main.go:141] libmachine: (old-k8s-version-564860)     <interface type='network'>
	I0420 01:15:13.947754  134009 main.go:141] libmachine: (old-k8s-version-564860)       <source network='default'/>
	I0420 01:15:13.947762  134009 main.go:141] libmachine: (old-k8s-version-564860)       <model type='virtio'/>
	I0420 01:15:13.947773  134009 main.go:141] libmachine: (old-k8s-version-564860)     </interface>
	I0420 01:15:13.947785  134009 main.go:141] libmachine: (old-k8s-version-564860)     <serial type='pty'>
	I0420 01:15:13.947799  134009 main.go:141] libmachine: (old-k8s-version-564860)       <target port='0'/>
	I0420 01:15:13.947809  134009 main.go:141] libmachine: (old-k8s-version-564860)     </serial>
	I0420 01:15:13.947823  134009 main.go:141] libmachine: (old-k8s-version-564860)     <console type='pty'>
	I0420 01:15:13.947835  134009 main.go:141] libmachine: (old-k8s-version-564860)       <target type='serial' port='0'/>
	I0420 01:15:13.947847  134009 main.go:141] libmachine: (old-k8s-version-564860)     </console>
	I0420 01:15:13.947867  134009 main.go:141] libmachine: (old-k8s-version-564860)     <rng model='virtio'>
	I0420 01:15:13.947881  134009 main.go:141] libmachine: (old-k8s-version-564860)       <backend model='random'>/dev/random</backend>
	I0420 01:15:13.947894  134009 main.go:141] libmachine: (old-k8s-version-564860)     </rng>
	I0420 01:15:13.947906  134009 main.go:141] libmachine: (old-k8s-version-564860)     
	I0420 01:15:13.947918  134009 main.go:141] libmachine: (old-k8s-version-564860)     
	I0420 01:15:13.947929  134009 main.go:141] libmachine: (old-k8s-version-564860)   </devices>
	I0420 01:15:13.947938  134009 main.go:141] libmachine: (old-k8s-version-564860) </domain>
	I0420 01:15:13.947950  134009 main.go:141] libmachine: (old-k8s-version-564860) 
	I0420 01:15:13.952242  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:7a:dc:35 in network default
	I0420 01:15:13.953159  134009 main.go:141] libmachine: (old-k8s-version-564860) Ensuring networks are active...
	I0420 01:15:13.953194  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:13.954198  134009 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network default is active
	I0420 01:15:13.954619  134009 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network mk-old-k8s-version-564860 is active
	I0420 01:15:13.955305  134009 main.go:141] libmachine: (old-k8s-version-564860) Getting domain xml...
	I0420 01:15:13.956524  134009 main.go:141] libmachine: (old-k8s-version-564860) Creating domain...
	I0420 01:15:15.423993  134009 main.go:141] libmachine: (old-k8s-version-564860) Waiting to get IP...
	I0420 01:15:15.424935  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:15.425454  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:15.425481  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:15.425439  135812 retry.go:31] will retry after 276.494348ms: waiting for machine to come up
	I0420 01:15:15.704086  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:15.706906  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:15.706933  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:15.706774  135812 retry.go:31] will retry after 238.273522ms: waiting for machine to come up
	I0420 01:15:15.946375  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:15.947054  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:15.947087  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:15.947003  135812 retry.go:31] will retry after 416.009171ms: waiting for machine to come up
	I0420 01:15:16.364947  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:16.365576  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:16.365610  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:16.365531  135812 retry.go:31] will retry after 432.493828ms: waiting for machine to come up
	I0420 01:15:16.799898  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:16.800675  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:16.800701  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:16.800587  135812 retry.go:31] will retry after 749.656308ms: waiting for machine to come up
	I0420 01:15:17.551675  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:17.552326  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:17.552354  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:17.552266  135812 retry.go:31] will retry after 790.941572ms: waiting for machine to come up
	I0420 01:15:18.346000  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:18.346614  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:18.346647  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:18.346559  135812 retry.go:31] will retry after 729.71766ms: waiting for machine to come up
	I0420 01:15:19.077588  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:19.078160  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:19.078189  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:19.078134  135812 retry.go:31] will retry after 1.171510633s: waiting for machine to come up
	I0420 01:15:20.251596  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:20.252145  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:20.252170  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:20.252103  135812 retry.go:31] will retry after 1.474656329s: waiting for machine to come up
	I0420 01:15:21.728938  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:21.729539  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:21.729562  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:21.729498  135812 retry.go:31] will retry after 2.062935072s: waiting for machine to come up
	I0420 01:15:23.793806  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:23.794216  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:23.794243  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:23.794179  135812 retry.go:31] will retry after 2.086584576s: waiting for machine to come up
	I0420 01:15:25.882616  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:25.883071  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:25.883095  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:25.883019  135812 retry.go:31] will retry after 3.211046666s: waiting for machine to come up
	I0420 01:15:29.095300  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:29.095690  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:29.095721  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:29.095651  135812 retry.go:31] will retry after 3.292557957s: waiting for machine to come up
	I0420 01:15:32.389462  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:32.389940  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:32.389965  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:32.389892  135812 retry.go:31] will retry after 3.942404826s: waiting for machine to come up
	I0420 01:15:36.334571  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:36.335096  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:15:36.335123  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:15:36.335041  135812 retry.go:31] will retry after 6.897730872s: waiting for machine to come up
	I0420 01:15:43.236477  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.236965  134009 main.go:141] libmachine: (old-k8s-version-564860) Found IP for machine: 192.168.61.91
	I0420 01:15:43.236995  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has current primary IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.237002  134009 main.go:141] libmachine: (old-k8s-version-564860) Reserving static IP address...
	I0420 01:15:43.237386  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"} in network mk-old-k8s-version-564860
	I0420 01:15:43.320184  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Getting to WaitForSSH function...
	I0420 01:15:43.320214  134009 main.go:141] libmachine: (old-k8s-version-564860) Reserved static IP address: 192.168.61.91
	I0420 01:15:43.320229  134009 main.go:141] libmachine: (old-k8s-version-564860) Waiting for SSH to be available...
	I0420 01:15:43.323060  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.323534  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:43.323563  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.323715  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH client type: external
	I0420 01:15:43.323737  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa (-rw-------)
	I0420 01:15:43.323778  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:15:43.323808  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | About to run SSH command:
	I0420 01:15:43.323822  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | exit 0
	I0420 01:15:43.449875  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | SSH cmd err, output: <nil>: 
	I0420 01:15:43.450207  134009 main.go:141] libmachine: (old-k8s-version-564860) KVM machine creation complete!
	I0420 01:15:43.450491  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:15:43.451156  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:15:43.451366  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:15:43.451533  134009 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 01:15:43.451555  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetState
	I0420 01:15:43.453097  134009 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 01:15:43.453111  134009 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 01:15:43.453117  134009 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 01:15:43.453122  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:43.455676  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.456115  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:43.456161  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.456229  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:43.456440  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:43.456633  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:43.456784  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:43.456970  134009 main.go:141] libmachine: Using SSH client type: native
	I0420 01:15:43.457176  134009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:15:43.457191  134009 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 01:15:43.561034  134009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
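Note: the `exit 0` probe above is how libmachine decides the freshly created guest is reachable over SSH; it simply retries the command until it exits cleanly. Below is a minimal standalone sketch of the same kind of probe, reusing the user, address and key path reported in this log. The retry loop and function name are illustrative only, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials host:22 as "docker" with the given private key and runs
// "exit 0", returning nil once the command succeeds.
func probeSSH(host, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the logged ssh invocation
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	// Values taken from the log; the key path is machine-specific.
	host := "192.168.61.91"
	key := os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-564860/id_rsa")
	for i := 0; i < 30; i++ {
		if err := probeSSH(host, key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}

With key-based auth and host-key checking disabled, a clean `exit 0` is enough evidence that both networking and sshd inside the VM are up, which is all the provisioner needs before continuing.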
	I0420 01:15:43.561061  134009 main.go:141] libmachine: Detecting the provisioner...
	I0420 01:15:43.561082  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:43.564161  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.564572  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:43.564607  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.564858  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:43.565089  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:43.565285  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:43.565497  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:43.565708  134009 main.go:141] libmachine: Using SSH client type: native
	I0420 01:15:43.565888  134009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:15:43.565906  134009 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 01:15:43.670908  134009 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 01:15:43.670993  134009 main.go:141] libmachine: found compatible host: buildroot
	I0420 01:15:43.671005  134009 main.go:141] libmachine: Provisioning with buildroot...
	I0420 01:15:43.671024  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:15:43.671345  134009 buildroot.go:166] provisioning hostname "old-k8s-version-564860"
	I0420 01:15:43.671379  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:15:43.671565  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:43.674539  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.674906  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:43.674936  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.675207  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:43.675413  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:43.675602  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:43.675755  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:43.675907  134009 main.go:141] libmachine: Using SSH client type: native
	I0420 01:15:43.676097  134009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:15:43.676110  134009 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-564860 && echo "old-k8s-version-564860" | sudo tee /etc/hostname
	I0420 01:15:43.802142  134009 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-564860
	
	I0420 01:15:43.802177  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:43.805114  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.805537  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:43.805600  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.805789  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:43.806033  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:43.806240  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:43.806431  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:43.806635  134009 main.go:141] libmachine: Using SSH client type: native
	I0420 01:15:43.806857  134009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:15:43.806884  134009 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-564860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-564860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-564860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:15:43.924687  134009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:15:43.924730  134009 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:15:43.924758  134009 buildroot.go:174] setting up certificates
	I0420 01:15:43.924771  134009 provision.go:84] configureAuth start
	I0420 01:15:43.924808  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:15:43.925137  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:15:43.928254  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.928679  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:43.928718  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.928917  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:43.931330  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.931676  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:43.931708  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:43.931882  134009 provision.go:143] copyHostCerts
	I0420 01:15:43.931952  134009 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:15:43.931967  134009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:15:43.932022  134009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:15:43.932169  134009 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:15:43.932179  134009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:15:43.932202  134009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:15:43.932269  134009 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:15:43.932283  134009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:15:43.932300  134009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:15:43.932356  134009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-564860 san=[127.0.0.1 192.168.61.91 localhost minikube old-k8s-version-564860]
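provision.go then issues a per-machine server certificate whose subject alternative names match the `san=[...]` list in the line above. The sketch below builds a certificate with the same SAN set using only the Go standard library; it is self-signed for brevity, whereas minikube signs with the CA key pair (ca.pem / ca-key.pem) referenced in the log, and the 26280h lifetime is the CertExpiration value that appears in the cluster config later in this log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-564860"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s from the cluster config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as reported in the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-564860"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.91")},
	}
	// Self-signed here; minikube instead signs with its CA private key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}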
	I0420 01:15:44.026191  134009 provision.go:177] copyRemoteCerts
	I0420 01:15:44.026271  134009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:15:44.026309  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:44.028979  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.029432  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:44.029467  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.029682  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:44.029913  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:44.030088  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:44.030244  134009 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:15:44.112802  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:15:44.138733  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0420 01:15:44.167316  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:15:44.198047  134009 provision.go:87] duration metric: took 273.260565ms to configureAuth
	I0420 01:15:44.198075  134009 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:15:44.198281  134009 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:15:44.198363  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:44.201003  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.201406  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:44.201448  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.201670  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:44.201878  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:44.202050  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:44.202184  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:44.202327  134009 main.go:141] libmachine: Using SSH client type: native
	I0420 01:15:44.202484  134009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:15:44.202498  134009 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:15:44.493094  134009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:15:44.493125  134009 main.go:141] libmachine: Checking connection to Docker...
	I0420 01:15:44.493134  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetURL
	I0420 01:15:44.494488  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using libvirt version 6000000
	I0420 01:15:44.496980  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.497394  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:44.497418  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.497559  134009 main.go:141] libmachine: Docker is up and running!
	I0420 01:15:44.497578  134009 main.go:141] libmachine: Reticulating splines...
	I0420 01:15:44.497587  134009 client.go:171] duration metric: took 31.116852894s to LocalClient.Create
	I0420 01:15:44.497610  134009 start.go:167] duration metric: took 31.116921302s to libmachine.API.Create "old-k8s-version-564860"
	I0420 01:15:44.497623  134009 start.go:293] postStartSetup for "old-k8s-version-564860" (driver="kvm2")
	I0420 01:15:44.497637  134009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:15:44.497660  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:15:44.497937  134009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:15:44.497963  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:44.500162  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.500483  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:44.500515  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.500643  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:44.500818  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:44.501030  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:44.501184  134009 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:15:44.585665  134009 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:15:44.591379  134009 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:15:44.591416  134009 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:15:44.591494  134009 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:15:44.591603  134009 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:15:44.591697  134009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:15:44.602913  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:15:44.632316  134009 start.go:296] duration metric: took 134.671685ms for postStartSetup
	I0420 01:15:44.632426  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:15:44.633056  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:15:44.635951  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.636517  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:44.636548  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.636869  134009 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:15:44.637101  134009 start.go:128] duration metric: took 31.280995391s to createHost
	I0420 01:15:44.637133  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:44.639445  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.639802  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:44.639831  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.639976  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:44.640207  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:44.640388  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:44.640556  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:44.640737  134009 main.go:141] libmachine: Using SSH client type: native
	I0420 01:15:44.640939  134009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:15:44.640954  134009 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0420 01:15:44.748439  134009 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713575744.735716978
	
	I0420 01:15:44.748473  134009 fix.go:216] guest clock: 1713575744.735716978
	I0420 01:15:44.748484  134009 fix.go:229] Guest: 2024-04-20 01:15:44.735716978 +0000 UTC Remote: 2024-04-20 01:15:44.637116763 +0000 UTC m=+76.992008872 (delta=98.600215ms)
	I0420 01:15:44.748514  134009 fix.go:200] guest clock delta is within tolerance: 98.600215ms
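The guest-clock check above works by running `date +%s.%N` inside the VM and comparing the result against the host's timestamp for the same moment; only if the absolute delta exceeds a tolerance does minikube resync the clock. A small sketch of that comparison is below, using the two timestamps from the log; the 2s tolerance constant is an assumption, since the log only reports that 98.6ms was within it.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds,
// nine fractional digits as in the log) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1713575744.735716978") // guest clock from the log
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 4, 20, 1, 15, 44, 637116763, time.UTC) // host-side "Remote" timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, not stated in the log
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}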
	I0420 01:15:44.748520  134009 start.go:83] releasing machines lock for "old-k8s-version-564860", held for 31.392672197s
	I0420 01:15:44.748555  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:15:44.748898  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:15:44.752116  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.752483  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:44.752514  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.752673  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:15:44.753322  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:15:44.753516  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:15:44.753627  134009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:15:44.753671  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:44.753778  134009 ssh_runner.go:195] Run: cat /version.json
	I0420 01:15:44.753817  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:15:44.756879  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.757149  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.757258  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:44.757335  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.757483  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:44.757521  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:44.757862  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:44.757914  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:15:44.758052  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:44.758076  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:15:44.758255  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:44.758270  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:15:44.758386  134009 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:15:44.758555  134009 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:15:44.848543  134009 ssh_runner.go:195] Run: systemctl --version
	I0420 01:15:44.879337  134009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:15:45.051149  134009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:15:45.058931  134009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:15:45.059008  134009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:15:45.078710  134009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:15:45.078742  134009 start.go:494] detecting cgroup driver to use...
	I0420 01:15:45.078841  134009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:15:45.103420  134009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:15:45.123641  134009 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:15:45.123712  134009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:15:45.142277  134009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:15:45.159628  134009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:15:45.302710  134009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:15:45.504964  134009 docker.go:233] disabling docker service ...
	I0420 01:15:45.505042  134009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:15:45.527351  134009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:15:45.547168  134009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:15:45.699219  134009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:15:45.854482  134009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:15:45.871214  134009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:15:45.893254  134009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0420 01:15:45.893339  134009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:15:45.906602  134009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:15:45.906679  134009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:15:45.923779  134009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:15:45.938441  134009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:15:45.952654  134009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:15:45.967642  134009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:15:45.981855  134009 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:15:45.981917  134009 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:15:46.012144  134009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:15:46.024614  134009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:15:46.175174  134009 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:15:46.365022  134009 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:15:46.365098  134009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:15:46.371878  134009 start.go:562] Will wait 60s for crictl version
	I0420 01:15:46.371959  134009 ssh_runner.go:195] Run: which crictl
	I0420 01:15:46.376650  134009 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:15:46.427695  134009 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:15:46.427796  134009 ssh_runner.go:195] Run: crio --version
	I0420 01:15:46.466206  134009 ssh_runner.go:195] Run: crio --version
	I0420 01:15:46.510685  134009 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0420 01:15:46.511923  134009 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:15:46.515290  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:46.515791  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:15:46.515819  134009 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:15:46.516039  134009 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:15:46.520999  134009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:15:46.537248  134009 kubeadm.go:877] updating cluster {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:15:46.537417  134009 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:15:46.537471  134009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:15:46.586979  134009 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:15:46.587063  134009 ssh_runner.go:195] Run: which lz4
	I0420 01:15:46.591909  134009 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:15:46.596672  134009 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:15:46.596711  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0420 01:15:48.914196  134009 crio.go:462] duration metric: took 2.322333618s to copy over tarball
	I0420 01:15:48.914311  134009 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:15:52.356652  134009 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.442300936s)
	I0420 01:15:52.356683  134009 crio.go:469] duration metric: took 3.442446157s to extract the tarball
	I0420 01:15:52.356692  134009 ssh_runner.go:146] rm: /preloaded.tar.lz4
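Because no preloaded tarball exists on the fresh VM, minikube copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 (~473 MB) over SSH and unpacks it on the guest with `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf`, which populates the CRI-O image store. For reference only, an in-process way to read such an archive is sketched below using the third-party github.com/pierrec/lz4/v4 package; that dependency is an assumption for illustration, since minikube itself simply shells out to tar as shown above.

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	lz4 "github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("/preloaded.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Wrap the lz4-compressed stream, then walk the tar entries inside it.
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// A real extractor would recreate each entry under /var here (and
		// restore xattrs, which the logged tar invocation preserves explicitly).
		fmt.Println(hdr.Name)
	}
}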
	I0420 01:15:52.406463  134009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:15:52.461620  134009 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:15:52.461655  134009 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:15:52.461735  134009 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:15:52.461748  134009 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:15:52.461811  134009 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:15:52.461816  134009 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0420 01:15:52.461833  134009 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:15:52.461809  134009 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0420 01:15:52.461785  134009 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:15:52.461965  134009 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:15:52.463506  134009 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:15:52.463506  134009 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0420 01:15:52.463523  134009 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:15:52.463520  134009 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0420 01:15:52.463507  134009 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:15:52.463520  134009 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:15:52.463559  134009 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:15:52.463878  134009 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:15:52.605943  134009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:15:52.616470  134009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0420 01:15:52.621544  134009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:15:52.626625  134009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0420 01:15:52.627503  134009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:15:52.638438  134009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:15:52.699275  134009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0420 01:15:52.747751  134009 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:15:52.782955  134009 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0420 01:15:52.788059  134009 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:15:52.788124  134009 ssh_runner.go:195] Run: which crictl
	I0420 01:15:52.826564  134009 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0420 01:15:52.826616  134009 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0420 01:15:52.826672  134009 ssh_runner.go:195] Run: which crictl
	I0420 01:15:52.832374  134009 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0420 01:15:52.832425  134009 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:15:52.832459  134009 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0420 01:15:52.832483  134009 ssh_runner.go:195] Run: which crictl
	I0420 01:15:52.832499  134009 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:15:52.832538  134009 ssh_runner.go:195] Run: which crictl
	I0420 01:15:52.832562  134009 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0420 01:15:52.832597  134009 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:15:52.832636  134009 ssh_runner.go:195] Run: which crictl
	I0420 01:15:52.832681  134009 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0420 01:15:52.832713  134009 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:15:52.832750  134009 ssh_runner.go:195] Run: which crictl
	I0420 01:15:52.994100  134009 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0420 01:15:52.994136  134009 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0420 01:15:52.994173  134009 ssh_runner.go:195] Run: which crictl
	I0420 01:15:52.994229  134009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:15:52.994258  134009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0420 01:15:52.994301  134009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:15:52.994347  134009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:15:52.994380  134009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0420 01:15:52.994424  134009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:15:53.163216  134009 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0420 01:15:53.163289  134009 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0420 01:15:53.163288  134009 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0420 01:15:53.163342  134009 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0420 01:15:53.163415  134009 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0420 01:15:53.163450  134009 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0420 01:15:53.163488  134009 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0420 01:15:53.222022  134009 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0420 01:15:53.222083  134009 cache_images.go:92] duration metric: took 760.41214ms to LoadCachedImages
	W0420 01:15:53.222206  134009 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0420 01:15:53.222228  134009 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.20.0 crio true true} ...
	I0420 01:15:53.222379  134009 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-564860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:15:53.222454  134009 ssh_runner.go:195] Run: crio config
	I0420 01:15:53.291409  134009 cni.go:84] Creating CNI manager for ""
	I0420 01:15:53.291438  134009 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:15:53.291451  134009 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:15:53.291474  134009 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-564860 NodeName:old-k8s-version-564860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0420 01:15:53.291648  134009 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-564860"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
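The kubeadm config printed above is one multi-document YAML stream: a kubeadm.k8s.io/v1beta2 InitConfiguration and ClusterConfiguration, plus the KubeletConfiguration and KubeProxyConfiguration component configs; the log below shows it being written to /var/tmp/minikube/kubeadm.yaml.new (2120 bytes). A small sketch that splits such a file into its documents and prints each apiVersion/kind with gopkg.in/yaml.v3 follows; the file path comes from the log and the field checks are illustrative only.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.v3's Decoder iterates over "---"-separated documents.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document should at least declare apiVersion and kind.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}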
	
	I0420 01:15:53.291727  134009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0420 01:15:53.305215  134009 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:15:53.305289  134009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:15:53.318721  134009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0420 01:15:53.345463  134009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:15:53.369970  134009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0420 01:15:53.403103  134009 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0420 01:15:53.409701  134009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:15:53.428232  134009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:15:53.607068  134009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:15:53.638914  134009 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860 for IP: 192.168.61.91
	I0420 01:15:53.638945  134009 certs.go:194] generating shared ca certs ...
	I0420 01:15:53.638967  134009 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:15:53.639153  134009 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:15:53.639226  134009 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:15:53.639241  134009 certs.go:256] generating profile certs ...
	I0420 01:15:53.639316  134009 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key
	I0420 01:15:53.639335  134009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.crt with IP's: []
	I0420 01:15:53.838671  134009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.crt ...
	I0420 01:15:53.838705  134009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.crt: {Name:mka230af60e2ea837b47fba2511891be76d78b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:15:53.838923  134009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key ...
	I0420 01:15:53.838985  134009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key: {Name:mk7c107c7e5e47fde871bd6dbf4bf77372dffc65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:15:53.839501  134009 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f
	I0420 01:15:53.839540  134009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt.d235183f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.91]
	I0420 01:15:53.995396  134009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt.d235183f ...
	I0420 01:15:53.995439  134009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt.d235183f: {Name:mkb221f25823c555005fd3dbb9895ddfbe5ca0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:15:53.995659  134009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f ...
	I0420 01:15:53.995682  134009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f: {Name:mk9e045eb4ec636c145adb179632dc58818a253a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:15:53.995793  134009 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt.d235183f -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt
	I0420 01:15:53.995916  134009 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key
	I0420 01:15:53.996002  134009 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key
	I0420 01:15:53.996025  134009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt with IP's: []
	I0420 01:15:54.076787  134009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt ...
	I0420 01:15:54.076826  134009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt: {Name:mk6c99d0f7e0721540255780d05a868007a0f2f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:15:54.077023  134009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key ...
	I0420 01:15:54.077043  134009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key: {Name:mkfed778643a193b2e5b4d9d72573a4e491b3b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:15:54.077247  134009 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:15:54.077302  134009 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:15:54.077332  134009 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:15:54.077375  134009 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:15:54.077409  134009 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:15:54.077445  134009 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:15:54.077499  134009 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:15:54.078370  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:15:54.110400  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:15:54.147469  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:15:54.181375  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:15:54.217289  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:15:54.256738  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:15:54.297735  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:15:54.338143  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:15:54.379692  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:15:54.421883  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:15:54.460676  134009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:15:54.502704  134009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:15:54.531096  134009 ssh_runner.go:195] Run: openssl version
	I0420 01:15:54.540737  134009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:15:54.560106  134009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:15:54.568014  134009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:15:54.568089  134009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:15:54.577739  134009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:15:54.597506  134009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:15:54.618051  134009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:15:54.626459  134009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:15:54.626590  134009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:15:54.636334  134009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:15:54.655230  134009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:15:54.682920  134009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:15:54.694277  134009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:15:54.694413  134009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:15:54.704301  134009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:15:54.726154  134009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:15:54.736607  134009 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 01:15:54.736772  134009 kubeadm.go:391] StartCluster: {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:15:54.736890  134009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:15:54.736987  134009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:15:54.801150  134009 cri.go:89] found id: ""
	I0420 01:15:54.801238  134009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 01:15:54.824284  134009 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:15:54.837791  134009 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:15:54.850937  134009 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:15:54.850963  134009 kubeadm.go:156] found existing configuration files:
	
	I0420 01:15:54.851026  134009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:15:54.862150  134009 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:15:54.862231  134009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:15:54.874460  134009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:15:54.886409  134009 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:15:54.886478  134009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:15:54.898473  134009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:15:54.916474  134009 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:15:54.916550  134009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:15:54.928655  134009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:15:54.942866  134009 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:15:54.942921  134009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:15:54.960453  134009 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:15:55.191072  134009 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:15:55.191931  134009 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:15:55.425109  134009 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:15:55.425281  134009 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:15:55.425452  134009 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:15:55.762478  134009 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:15:55.764487  134009 out.go:204]   - Generating certificates and keys ...
	I0420 01:15:55.764735  134009 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:15:55.764842  134009 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:15:55.942496  134009 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 01:15:56.346124  134009 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 01:15:56.620774  134009 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 01:15:56.828750  134009 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 01:15:56.963792  134009 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 01:15:56.964159  134009 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-564860] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0420 01:15:57.088868  134009 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 01:15:57.089166  134009 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-564860] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0420 01:15:57.332624  134009 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 01:15:57.548035  134009 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 01:15:57.760985  134009 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 01:15:57.761449  134009 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:15:57.951228  134009 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:15:58.237742  134009 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:15:58.442404  134009 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:15:58.879063  134009 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:15:58.907235  134009 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:15:58.909708  134009 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:15:58.909809  134009 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:15:59.143128  134009 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:15:59.144528  134009 out.go:204]   - Booting up control plane ...
	I0420 01:15:59.144720  134009 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:15:59.158989  134009 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:15:59.162938  134009 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:15:59.163042  134009 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:15:59.166695  134009 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:16:39.165827  134009 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:16:39.166726  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:16:39.167135  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:16:44.167917  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:16:44.168131  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:16:54.168791  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:16:54.168966  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:17:14.170201  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:17:14.170463  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:17:54.170679  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:17:54.170924  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:17:54.170939  134009 kubeadm.go:309] 
	I0420 01:17:54.170990  134009 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:17:54.171037  134009 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:17:54.171044  134009 kubeadm.go:309] 
	I0420 01:17:54.171084  134009 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:17:54.171133  134009 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:17:54.171262  134009 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:17:54.171270  134009 kubeadm.go:309] 
	I0420 01:17:54.171419  134009 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:17:54.171461  134009 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:17:54.171502  134009 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:17:54.171509  134009 kubeadm.go:309] 
	I0420 01:17:54.171649  134009 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:17:54.171752  134009 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:17:54.171759  134009 kubeadm.go:309] 
	I0420 01:17:54.171881  134009 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:17:54.171988  134009 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:17:54.172079  134009 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:17:54.172174  134009 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:17:54.172180  134009 kubeadm.go:309] 
	I0420 01:17:54.173502  134009 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:17:54.173617  134009 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:17:54.173709  134009 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0420 01:17:54.173870  134009 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-564860] and IPs [192.168.61.91 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-564860] and IPs [192.168.61.91 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
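At this point kubeadm has given up waiting on the kubelet health endpoint and minikube will retry. As a hedged, illustrative follow-up of the hints printed above (standard minikube/systemd/crictl usage, not commands from this run), the kubelet state on the node for the profile named in this log could be inspected with:

	# Check the kubelet and the health endpoint kubeadm was polling.
	minikube ssh -p old-k8s-version-564860 -- sudo systemctl status kubelet --no-pager
	minikube ssh -p old-k8s-version-564860 -- sudo journalctl -u kubelet -n 50 --no-pager
	minikube ssh -p old-k8s-version-564860 -- curl -sS http://localhost:10248/healthz
	# List control-plane containers via CRI-O, as the kubeadm output suggests.
	minikube ssh -p old-k8s-version-564860 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a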
	
	I0420 01:17:54.173935  134009 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:17:57.687670  134009 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.513697766s)
	I0420 01:17:57.687766  134009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:17:57.703403  134009 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:17:57.716994  134009 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:17:57.717016  134009 kubeadm.go:156] found existing configuration files:
	
	I0420 01:17:57.717074  134009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:17:57.728058  134009 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:17:57.728127  134009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:17:57.740085  134009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:17:57.751954  134009 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:17:57.752008  134009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:17:57.764751  134009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:17:57.776495  134009 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:17:57.776558  134009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:17:57.789068  134009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:17:57.801292  134009 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:17:57.801374  134009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:17:57.813709  134009 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:17:57.886054  134009 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:17:57.886142  134009 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:17:58.042160  134009 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:17:58.042372  134009 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:17:58.042534  134009 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:17:58.265363  134009 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:17:58.267312  134009 out.go:204]   - Generating certificates and keys ...
	I0420 01:17:58.267428  134009 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:17:58.267528  134009 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:17:58.267635  134009 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:17:58.267733  134009 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:17:58.267845  134009 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:17:58.267919  134009 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:17:58.268313  134009 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:17:58.268736  134009 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:17:58.269083  134009 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:17:58.269597  134009 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:17:58.269766  134009 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:17:58.269871  134009 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:17:58.451046  134009 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:17:58.658719  134009 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:17:58.983509  134009 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:17:59.140622  134009 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:17:59.157139  134009 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:17:59.159325  134009 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:17:59.159399  134009 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:17:59.333757  134009 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:17:59.335304  134009 out.go:204]   - Booting up control plane ...
	I0420 01:17:59.335445  134009 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:17:59.341474  134009 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:17:59.342485  134009 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:17:59.343133  134009 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:17:59.345361  134009 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:18:39.347619  134009 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:18:39.347911  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:18:39.348092  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:18:44.348471  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:18:44.348737  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:18:54.349506  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:18:54.349678  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:19:14.350978  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:19:14.351247  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:19:54.350834  134009 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:19:54.351152  134009 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:19:54.351179  134009 kubeadm.go:309] 
	I0420 01:19:54.351225  134009 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:19:54.351273  134009 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:19:54.351285  134009 kubeadm.go:309] 
	I0420 01:19:54.351332  134009 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:19:54.351386  134009 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:19:54.351518  134009 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:19:54.351530  134009 kubeadm.go:309] 
	I0420 01:19:54.351646  134009 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:19:54.351703  134009 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:19:54.351740  134009 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:19:54.351748  134009 kubeadm.go:309] 
	I0420 01:19:54.351853  134009 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:19:54.351964  134009 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:19:54.351977  134009 kubeadm.go:309] 
	I0420 01:19:54.352065  134009 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:19:54.352152  134009 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:19:54.352231  134009 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:19:54.352310  134009 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:19:54.352323  134009 kubeadm.go:309] 
	I0420 01:19:54.352742  134009 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:19:54.352814  134009 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:19:54.352866  134009 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0420 01:19:54.352928  134009 kubeadm.go:393] duration metric: took 3m59.616199952s to StartCluster
	I0420 01:19:54.352976  134009 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:19:54.353033  134009 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:19:54.399967  134009 cri.go:89] found id: ""
	I0420 01:19:54.400001  134009 logs.go:276] 0 containers: []
	W0420 01:19:54.400012  134009 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:19:54.400019  134009 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:19:54.400085  134009 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:19:54.441141  134009 cri.go:89] found id: ""
	I0420 01:19:54.441165  134009 logs.go:276] 0 containers: []
	W0420 01:19:54.441173  134009 logs.go:278] No container was found matching "etcd"
	I0420 01:19:54.441178  134009 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:19:54.441233  134009 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:19:54.483205  134009 cri.go:89] found id: ""
	I0420 01:19:54.483242  134009 logs.go:276] 0 containers: []
	W0420 01:19:54.483255  134009 logs.go:278] No container was found matching "coredns"
	I0420 01:19:54.483269  134009 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:19:54.483343  134009 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:19:54.520315  134009 cri.go:89] found id: ""
	I0420 01:19:54.520346  134009 logs.go:276] 0 containers: []
	W0420 01:19:54.520356  134009 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:19:54.520366  134009 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:19:54.520413  134009 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:19:54.560071  134009 cri.go:89] found id: ""
	I0420 01:19:54.560113  134009 logs.go:276] 0 containers: []
	W0420 01:19:54.560122  134009 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:19:54.560128  134009 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:19:54.560184  134009 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:19:54.598166  134009 cri.go:89] found id: ""
	I0420 01:19:54.598194  134009 logs.go:276] 0 containers: []
	W0420 01:19:54.598202  134009 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:19:54.598209  134009 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:19:54.598272  134009 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:19:54.647197  134009 cri.go:89] found id: ""
	I0420 01:19:54.647223  134009 logs.go:276] 0 containers: []
	W0420 01:19:54.647231  134009 logs.go:278] No container was found matching "kindnet"
	I0420 01:19:54.647240  134009 logs.go:123] Gathering logs for kubelet ...
	I0420 01:19:54.647254  134009 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:19:54.702147  134009 logs.go:123] Gathering logs for dmesg ...
	I0420 01:19:54.702176  134009 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:19:54.717713  134009 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:19:54.717739  134009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:19:54.881082  134009 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:19:54.881106  134009 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:19:54.881123  134009 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:19:54.976346  134009 logs.go:123] Gathering logs for container status ...
	I0420 01:19:54.976386  134009 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0420 01:19:55.022223  134009 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0420 01:19:55.022274  134009 out.go:239] * 
	W0420 01:19:55.022329  134009 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:19:55.022355  134009 out.go:239] * 
	W0420 01:19:55.023197  134009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:19:55.026843  134009 out.go:177] 
	W0420 01:19:55.028625  134009 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:19:55.028685  134009 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0420 01:19:55.028714  134009 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0420 01:19:55.030419  134009 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-564860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
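The kubeadm output above loops on a refused kubelet health check at http://localhost:10248/healthz, and the log's own suggestion is to inspect 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal follow-up sketch along those lines; the kubelet config path comes from the kubeadm output above, while /etc/crio/crio.conf as CRI-O's config location inside the guest is an assumption:

	# Compare the kubelet and CRI-O cgroup drivers inside the VM; a mismatch is a
	# common cause of the connection-refused loop on the kubelet healthz port.
	minikube ssh -p old-k8s-version-564860 -- \
	  'sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml; sudo grep -i cgroup_manager /etc/crio/crio.conf'

	# Retry the failed start with the kubelet pinned to the systemd driver, as the
	# suggestion in the log proposes (other flags copied from the failing invocation).
	minikube start -p old-k8s-version-564860 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
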
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 6 (257.050929ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:19:55.332003  141381 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-564860" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (327.71s)
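The post-mortem above shows kubectl pointing at a stale minikube VM, with the profile missing from the kubeconfig at /home/jenkins/minikube-integration/18703-76456/kubeconfig. A short sketch of the fix the warning itself names, assuming the profile's context entry still exists to be refreshed:

	# Re-sync the kubeconfig entry for this profile with the VM's current endpoint,
	# then confirm which context kubectl is actually using.
	minikube update-context -p old-k8s-version-564860
	kubectl config current-context
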

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-338118 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-338118 --alsologtostderr -v=3: exit status 82 (2m0.563823222s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-338118"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:17:54.183636  140670 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:17:54.183746  140670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:17:54.183758  140670 out.go:304] Setting ErrFile to fd 2...
	I0420 01:17:54.183763  140670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:17:54.184133  140670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:17:54.184446  140670 out.go:298] Setting JSON to false
	I0420 01:17:54.184548  140670 mustload.go:65] Loading cluster: no-preload-338118
	I0420 01:17:54.185090  140670 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:17:54.185212  140670 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/config.json ...
	I0420 01:17:54.185448  140670 mustload.go:65] Loading cluster: no-preload-338118
	I0420 01:17:54.185641  140670 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:17:54.185672  140670 stop.go:39] StopHost: no-preload-338118
	I0420 01:17:54.186240  140670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:17:54.186285  140670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:17:54.205498  140670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0420 01:17:54.209443  140670 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:17:54.210109  140670 main.go:141] libmachine: Using API Version  1
	I0420 01:17:54.210129  140670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:17:54.210547  140670 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:17:54.213031  140670 out.go:177] * Stopping node "no-preload-338118"  ...
	I0420 01:17:54.215046  140670 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0420 01:17:54.215073  140670 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:17:54.215345  140670 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0420 01:17:54.215364  140670 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:17:54.218912  140670 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:17:54.219381  140670 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:16:03 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:17:54.219419  140670 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:17:54.219658  140670 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:17:54.219852  140670 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:17:54.220006  140670 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:17:54.220171  140670 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:17:54.377554  140670 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0420 01:17:54.419247  140670 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0420 01:17:54.479949  140670 main.go:141] libmachine: Stopping "no-preload-338118"...
	I0420 01:17:54.479993  140670 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:17:54.482201  140670 main.go:141] libmachine: (no-preload-338118) Calling .Stop
	I0420 01:17:54.486230  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 0/120
	I0420 01:17:55.487822  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 1/120
	I0420 01:17:56.489080  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 2/120
	I0420 01:17:57.490477  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 3/120
	I0420 01:17:58.492475  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 4/120
	I0420 01:17:59.494511  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 5/120
	I0420 01:18:00.496444  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 6/120
	I0420 01:18:01.497674  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 7/120
	I0420 01:18:02.498937  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 8/120
	I0420 01:18:03.500109  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 9/120
	I0420 01:18:04.502127  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 10/120
	I0420 01:18:05.503645  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 11/120
	I0420 01:18:06.504980  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 12/120
	I0420 01:18:07.506238  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 13/120
	I0420 01:18:08.507612  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 14/120
	I0420 01:18:09.509532  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 15/120
	I0420 01:18:10.511698  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 16/120
	I0420 01:18:11.512999  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 17/120
	I0420 01:18:12.514218  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 18/120
	I0420 01:18:13.515409  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 19/120
	I0420 01:18:14.517525  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 20/120
	I0420 01:18:15.518905  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 21/120
	I0420 01:18:16.520031  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 22/120
	I0420 01:18:17.521331  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 23/120
	I0420 01:18:18.522647  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 24/120
	I0420 01:18:19.524903  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 25/120
	I0420 01:18:20.526189  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 26/120
	I0420 01:18:21.527436  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 27/120
	I0420 01:18:22.528687  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 28/120
	I0420 01:18:23.530580  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 29/120
	I0420 01:18:24.532716  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 30/120
	I0420 01:18:25.534249  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 31/120
	I0420 01:18:26.535675  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 32/120
	I0420 01:18:27.537222  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 33/120
	I0420 01:18:28.539456  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 34/120
	I0420 01:18:29.541567  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 35/120
	I0420 01:18:30.543097  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 36/120
	I0420 01:18:31.544562  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 37/120
	I0420 01:18:32.545922  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 38/120
	I0420 01:18:33.547231  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 39/120
	I0420 01:18:34.549202  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 40/120
	I0420 01:18:35.550797  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 41/120
	I0420 01:18:36.552307  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 42/120
	I0420 01:18:37.553668  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 43/120
	I0420 01:18:38.555577  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 44/120
	I0420 01:18:39.557542  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 45/120
	I0420 01:18:40.558830  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 46/120
	I0420 01:18:41.560351  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 47/120
	I0420 01:18:42.561690  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 48/120
	I0420 01:18:43.563746  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 49/120
	I0420 01:18:44.565793  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 50/120
	I0420 01:18:45.567053  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 51/120
	I0420 01:18:46.568672  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 52/120
	I0420 01:18:47.570043  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 53/120
	I0420 01:18:48.571426  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 54/120
	I0420 01:18:49.573194  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 55/120
	I0420 01:18:50.574455  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 56/120
	I0420 01:18:51.576190  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 57/120
	I0420 01:18:52.577475  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 58/120
	I0420 01:18:53.578994  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 59/120
	I0420 01:18:54.580249  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 60/120
	I0420 01:18:55.581500  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 61/120
	I0420 01:18:56.582875  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 62/120
	I0420 01:18:57.584135  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 63/120
	I0420 01:18:58.585586  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 64/120
	I0420 01:18:59.587542  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 65/120
	I0420 01:19:00.588978  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 66/120
	I0420 01:19:01.590286  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 67/120
	I0420 01:19:02.591734  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 68/120
	I0420 01:19:03.593350  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 69/120
	I0420 01:19:04.595713  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 70/120
	I0420 01:19:05.597065  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 71/120
	I0420 01:19:06.598434  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 72/120
	I0420 01:19:07.599629  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 73/120
	I0420 01:19:08.601002  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 74/120
	I0420 01:19:09.602950  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 75/120
	I0420 01:19:10.604267  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 76/120
	I0420 01:19:11.605551  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 77/120
	I0420 01:19:12.607265  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 78/120
	I0420 01:19:13.608663  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 79/120
	I0420 01:19:14.611027  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 80/120
	I0420 01:19:15.612508  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 81/120
	I0420 01:19:16.614753  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 82/120
	I0420 01:19:17.616645  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 83/120
	I0420 01:19:18.618033  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 84/120
	I0420 01:19:19.620098  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 85/120
	I0420 01:19:20.622103  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 86/120
	I0420 01:19:21.623836  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 87/120
	I0420 01:19:22.625222  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 88/120
	I0420 01:19:23.626548  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 89/120
	I0420 01:19:24.628614  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 90/120
	I0420 01:19:25.630189  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 91/120
	I0420 01:19:26.631475  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 92/120
	I0420 01:19:27.632696  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 93/120
	I0420 01:19:28.634044  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 94/120
	I0420 01:19:29.636205  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 95/120
	I0420 01:19:30.637590  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 96/120
	I0420 01:19:31.638821  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 97/120
	I0420 01:19:32.640267  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 98/120
	I0420 01:19:33.641573  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 99/120
	I0420 01:19:34.643938  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 100/120
	I0420 01:19:35.645253  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 101/120
	I0420 01:19:36.646616  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 102/120
	I0420 01:19:37.648030  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 103/120
	I0420 01:19:38.649616  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 104/120
	I0420 01:19:39.651870  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 105/120
	I0420 01:19:40.653296  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 106/120
	I0420 01:19:41.654856  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 107/120
	I0420 01:19:42.656289  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 108/120
	I0420 01:19:43.657653  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 109/120
	I0420 01:19:44.659934  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 110/120
	I0420 01:19:45.661406  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 111/120
	I0420 01:19:46.662876  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 112/120
	I0420 01:19:47.664162  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 113/120
	I0420 01:19:48.665618  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 114/120
	I0420 01:19:49.667566  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 115/120
	I0420 01:19:50.668838  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 116/120
	I0420 01:19:51.670396  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 117/120
	I0420 01:19:52.671886  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 118/120
	I0420 01:19:53.673167  140670 main.go:141] libmachine: (no-preload-338118) Waiting for machine to stop 119/120
	I0420 01:19:54.673842  140670 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0420 01:19:54.673895  140670 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0420 01:19:54.675865  140670 out.go:177] 
	W0420 01:19:54.677230  140670 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0420 01:19:54.677247  140670 out.go:239] * 
	W0420 01:19:54.681232  140670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:19:54.682788  140670 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-338118 --alsologtostderr -v=3" : exit status 82
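The stop above gives the guest 120 seconds (the "Waiting for machine to stop 0/120 ... 119/120" loop) and then exits with GUEST_STOP_TIMEOUT because the VM still reports state "Running". A minimal sketch for inspecting and force-stopping the guest from the Jenkins host, assuming virsh is available alongside the kvm2 driver; the domain name is the one shown in the DHCP-lease lines above:

	# Ask libvirt what state the domain is in, then hard power it off if the
	# graceful stop never completes (acceptable here, since the test is tearing down).
	sudo virsh domstate no-preload-338118
	sudo virsh destroy no-preload-338118
	sudo virsh list --all
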
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-338118 -n no-preload-338118
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-338118 -n no-preload-338118: exit status 3 (18.573170278s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:20:13.257681  141334 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.89:22: connect: no route to host
	E0420 01:20:13.257707  141334 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.89:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-338118" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.14s)
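The follow-up status probe fails with "no route to host" on 192.168.72.89:22, so the post-mortem cannot even open an SSH session to retrieve logs. A quick reachability sketch using the address, user, and key path printed by the stop command above; nc and ssh being present on the Jenkins host are assumptions:

	# Is the guest's SSH port reachable at all?
	nc -vz -w 5 192.168.72.89 22

	# If the port answers, try the same session the status command would open.
	ssh -o ConnectTimeout=5 \
	  -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa \
	  docker@192.168.72.89 'uptime'
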

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-907988 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-907988 --alsologtostderr -v=3: exit status 82 (2m0.515681486s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-907988"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:18:07.454062  140819 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:18:07.454327  140819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:18:07.454337  140819 out.go:304] Setting ErrFile to fd 2...
	I0420 01:18:07.454341  140819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:18:07.454857  140819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:18:07.455262  140819 out.go:298] Setting JSON to false
	I0420 01:18:07.455370  140819 mustload.go:65] Loading cluster: default-k8s-diff-port-907988
	I0420 01:18:07.456455  140819 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:18:07.456638  140819 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/config.json ...
	I0420 01:18:07.456893  140819 mustload.go:65] Loading cluster: default-k8s-diff-port-907988
	I0420 01:18:07.457052  140819 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:18:07.457106  140819 stop.go:39] StopHost: default-k8s-diff-port-907988
	I0420 01:18:07.457549  140819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:18:07.457592  140819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:18:07.472198  140819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0420 01:18:07.472614  140819 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:18:07.473155  140819 main.go:141] libmachine: Using API Version  1
	I0420 01:18:07.473179  140819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:18:07.473542  140819 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:18:07.476063  140819 out.go:177] * Stopping node "default-k8s-diff-port-907988"  ...
	I0420 01:18:07.477533  140819 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0420 01:18:07.477574  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:18:07.477802  140819 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0420 01:18:07.477847  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:18:07.480710  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:18:07.481163  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:17:11 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:18:07.481192  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:18:07.481393  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:18:07.481561  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:18:07.481773  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:18:07.481967  140819 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:18:07.583795  140819 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0420 01:18:07.650687  140819 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0420 01:18:07.709217  140819 main.go:141] libmachine: Stopping "default-k8s-diff-port-907988"...
	I0420 01:18:07.709267  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:18:07.711098  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Stop
	I0420 01:18:07.714620  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 0/120
	I0420 01:18:08.716080  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 1/120
	I0420 01:18:09.717888  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 2/120
	I0420 01:18:10.719882  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 3/120
	I0420 01:18:11.721226  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 4/120
	I0420 01:18:12.722626  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 5/120
	I0420 01:18:13.724147  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 6/120
	I0420 01:18:14.725628  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 7/120
	I0420 01:18:15.727101  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 8/120
	I0420 01:18:16.728518  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 9/120
	I0420 01:18:17.730690  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 10/120
	I0420 01:18:18.732050  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 11/120
	I0420 01:18:19.733816  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 12/120
	I0420 01:18:20.735130  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 13/120
	I0420 01:18:21.736531  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 14/120
	I0420 01:18:22.738325  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 15/120
	I0420 01:18:23.739847  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 16/120
	I0420 01:18:24.741439  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 17/120
	I0420 01:18:25.742861  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 18/120
	I0420 01:18:26.744283  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 19/120
	I0420 01:18:27.746659  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 20/120
	I0420 01:18:28.748099  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 21/120
	I0420 01:18:29.750001  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 22/120
	I0420 01:18:30.751851  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 23/120
	I0420 01:18:31.753192  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 24/120
	I0420 01:18:32.755281  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 25/120
	I0420 01:18:33.756560  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 26/120
	I0420 01:18:34.757865  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 27/120
	I0420 01:18:35.759369  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 28/120
	I0420 01:18:36.760898  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 29/120
	I0420 01:18:37.762108  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 30/120
	I0420 01:18:38.763279  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 31/120
	I0420 01:18:39.764703  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 32/120
	I0420 01:18:40.766073  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 33/120
	I0420 01:18:41.767258  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 34/120
	I0420 01:18:42.769053  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 35/120
	I0420 01:18:43.770392  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 36/120
	I0420 01:18:44.771694  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 37/120
	I0420 01:18:45.773128  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 38/120
	I0420 01:18:46.774489  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 39/120
	I0420 01:18:47.776274  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 40/120
	I0420 01:18:48.777574  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 41/120
	I0420 01:18:49.779076  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 42/120
	I0420 01:18:50.780402  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 43/120
	I0420 01:18:51.781762  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 44/120
	I0420 01:18:52.783539  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 45/120
	I0420 01:18:53.784714  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 46/120
	I0420 01:18:54.786159  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 47/120
	I0420 01:18:55.787362  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 48/120
	I0420 01:18:56.788839  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 49/120
	I0420 01:18:57.791160  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 50/120
	I0420 01:18:58.792623  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 51/120
	I0420 01:18:59.793988  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 52/120
	I0420 01:19:00.795413  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 53/120
	I0420 01:19:01.796680  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 54/120
	I0420 01:19:02.798507  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 55/120
	I0420 01:19:03.799998  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 56/120
	I0420 01:19:04.801330  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 57/120
	I0420 01:19:05.802654  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 58/120
	I0420 01:19:06.803936  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 59/120
	I0420 01:19:07.806266  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 60/120
	I0420 01:19:08.807735  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 61/120
	I0420 01:19:09.809148  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 62/120
	I0420 01:19:10.810458  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 63/120
	I0420 01:19:11.811856  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 64/120
	I0420 01:19:12.813999  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 65/120
	I0420 01:19:13.815411  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 66/120
	I0420 01:19:14.816923  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 67/120
	I0420 01:19:15.818319  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 68/120
	I0420 01:19:16.819729  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 69/120
	I0420 01:19:17.821932  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 70/120
	I0420 01:19:18.823708  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 71/120
	I0420 01:19:19.825248  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 72/120
	I0420 01:19:20.826518  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 73/120
	I0420 01:19:21.827808  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 74/120
	I0420 01:19:22.829925  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 75/120
	I0420 01:19:23.831790  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 76/120
	I0420 01:19:24.833061  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 77/120
	I0420 01:19:25.834250  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 78/120
	I0420 01:19:26.835611  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 79/120
	I0420 01:19:27.837686  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 80/120
	I0420 01:19:28.840051  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 81/120
	I0420 01:19:29.841659  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 82/120
	I0420 01:19:30.843025  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 83/120
	I0420 01:19:31.844748  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 84/120
	I0420 01:19:32.846913  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 85/120
	I0420 01:19:33.848242  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 86/120
	I0420 01:19:34.849935  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 87/120
	I0420 01:19:35.851901  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 88/120
	I0420 01:19:36.853298  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 89/120
	I0420 01:19:37.855520  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 90/120
	I0420 01:19:38.857017  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 91/120
	I0420 01:19:39.858449  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 92/120
	I0420 01:19:40.859722  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 93/120
	I0420 01:19:41.861360  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 94/120
	I0420 01:19:42.863407  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 95/120
	I0420 01:19:43.864973  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 96/120
	I0420 01:19:44.866498  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 97/120
	I0420 01:19:45.867953  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 98/120
	I0420 01:19:46.869507  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 99/120
	I0420 01:19:47.871786  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 100/120
	I0420 01:19:48.873027  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 101/120
	I0420 01:19:49.874416  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 102/120
	I0420 01:19:50.875616  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 103/120
	I0420 01:19:51.877140  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 104/120
	I0420 01:19:52.879180  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 105/120
	I0420 01:19:53.880452  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 106/120
	I0420 01:19:54.881992  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 107/120
	I0420 01:19:55.883905  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 108/120
	I0420 01:19:56.885436  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 109/120
	I0420 01:19:57.887492  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 110/120
	I0420 01:19:58.888985  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 111/120
	I0420 01:19:59.891053  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 112/120
	I0420 01:20:00.892522  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 113/120
	I0420 01:20:01.893913  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 114/120
	I0420 01:20:02.895841  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 115/120
	I0420 01:20:03.897727  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 116/120
	I0420 01:20:04.899837  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 117/120
	I0420 01:20:05.901658  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 118/120
	I0420 01:20:06.903275  140819 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for machine to stop 119/120
	I0420 01:20:07.904624  140819 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0420 01:20:07.904678  140819 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0420 01:20:07.906434  140819 out.go:177] 
	W0420 01:20:07.907827  140819 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0420 01:20:07.907842  140819 out.go:239] * 
	* 
	W0420 01:20:07.911557  140819 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:20:07.913213  140819 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-907988 --alsologtostderr -v=3" : exit status 82
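The "Waiting for machine to stop N/120" lines above are a fixed-budget poll: the driver's Stop call is issued once, then the machine state is re-checked roughly once per second for 120 attempts before the command gives up with GUEST_STOP_TIMEOUT. The Go sketch below only illustrates that pattern; the stopper interface and stuckVM type are hypothetical stand-ins, not minikube's actual driver code.

	package main

	import (
		"fmt"
		"time"
	)

	// stopper is a hypothetical stand-in for the libmachine driver seen in the log.
	type stopper interface {
		Stop() error            // request a guest shutdown
		State() (string, error) // e.g. "Running" or "Stopped"
	}

	// stopWithBudget issues one Stop request, then re-checks the machine state
	// once per second for at most `attempts` tries, mirroring the
	// "Waiting for machine to stop N/120" lines above.
	func stopWithBudget(m stopper, attempts int) error {
		if err := m.Stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			if st, err := m.State(); err == nil && st == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		st, _ := m.State()
		return fmt.Errorf("unable to stop vm, current state %q", st)
	}

	// stuckVM never leaves the Running state, reproducing the timeout in the log.
	type stuckVM struct{}

	func (stuckVM) Stop() error            { return nil }
	func (stuckVM) State() (string, error) { return "Running", nil }

	func main() {
		if err := stopWithBudget(stuckVM{}, 120); err != nil {
			fmt.Println("stop err:", err)
		}
	}

With a guest that never reports "Stopped", this sketch ends with the same "unable to stop vm, current state \"Running\"" message that closes the stderr block above.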
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
E0420 01:20:11.828682   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988: exit status 3 (18.654922154s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:20:26.569647  141543 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0420 01:20:26.569672  141543 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-907988" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.17s)
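The post-mortem above runs `status --format={{.Host}}`, which renders a Go text/template against the profile's status; with SSH to the node failing, the host field comes back as "Error" and the command exits with status 3. A minimal sketch of how such a format string renders, using a hypothetical Status struct rather than minikube's real type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in for the struct rendered by --format;
	// only the Host field is modeled here.
	type Status struct {
		Host string
	}

	func main() {
		// The same template string passed on the command line in the log:
		//   out/minikube-linux-amd64 status --format={{.Host}}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// When the node is unreachable, the report shows "Error" for the host.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Error"})
	}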

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-269507 --alsologtostderr -v=3
E0420 01:18:49.905537   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:49.910797   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:49.921023   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:49.941322   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:49.981579   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:50.061929   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:50.222329   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:50.542996   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:51.184100   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:52.464934   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:54.475284   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:54.480548   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:54.490786   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:54.511050   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:54.551340   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:54.631489   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:54.792450   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:55.025898   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:18:55.113197   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:55.754131   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:57.035226   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:18:59.595816   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:19:00.146985   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:19:04.716081   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:19:10.387305   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:19:13.951951   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
E0420 01:19:14.956285   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:19:30.868441   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:19:35.437472   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:19:36.575279   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:36.580559   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:36.590861   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:36.611196   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:36.651883   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:36.732289   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:36.892616   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:37.213266   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:37.853609   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:38.345382   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:19:39.134512   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:41.695334   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:19:46.816300   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-269507 --alsologtostderr -v=3: exit status 82 (2m0.54210684s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-269507"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:18:19.877924  140955 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:18:19.878226  140955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:18:19.878237  140955 out.go:304] Setting ErrFile to fd 2...
	I0420 01:18:19.878244  140955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:18:19.878456  140955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:18:19.878707  140955 out.go:298] Setting JSON to false
	I0420 01:18:19.878807  140955 mustload.go:65] Loading cluster: embed-certs-269507
	I0420 01:18:19.879160  140955 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:18:19.879242  140955 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/config.json ...
	I0420 01:18:19.879430  140955 mustload.go:65] Loading cluster: embed-certs-269507
	I0420 01:18:19.879560  140955 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:18:19.879605  140955 stop.go:39] StopHost: embed-certs-269507
	I0420 01:18:19.879996  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:18:19.880055  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:18:19.894742  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37769
	I0420 01:18:19.895221  140955 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:18:19.895808  140955 main.go:141] libmachine: Using API Version  1
	I0420 01:18:19.895833  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:18:19.896149  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:18:19.898666  140955 out.go:177] * Stopping node "embed-certs-269507"  ...
	I0420 01:18:19.899906  140955 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0420 01:18:19.899930  140955 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:18:19.900151  140955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0420 01:18:19.900187  140955 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:18:19.903027  140955 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:18:19.903443  140955 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:18:19.903475  140955 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:18:19.903573  140955 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:18:19.903712  140955 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:18:19.903899  140955 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:18:19.904060  140955 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:18:20.026043  140955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0420 01:18:20.087875  140955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0420 01:18:20.163350  140955 main.go:141] libmachine: Stopping "embed-certs-269507"...
	I0420 01:18:20.163405  140955 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:18:20.165014  140955 main.go:141] libmachine: (embed-certs-269507) Calling .Stop
	I0420 01:18:20.168751  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 0/120
	I0420 01:18:21.170260  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 1/120
	I0420 01:18:22.171603  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 2/120
	I0420 01:18:23.172752  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 3/120
	I0420 01:18:24.174136  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 4/120
	I0420 01:18:25.176409  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 5/120
	I0420 01:18:26.177742  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 6/120
	I0420 01:18:27.179322  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 7/120
	I0420 01:18:28.180792  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 8/120
	I0420 01:18:29.182320  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 9/120
	I0420 01:18:30.183679  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 10/120
	I0420 01:18:31.185344  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 11/120
	I0420 01:18:32.186562  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 12/120
	I0420 01:18:33.187962  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 13/120
	I0420 01:18:34.189215  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 14/120
	I0420 01:18:35.191312  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 15/120
	I0420 01:18:36.192740  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 16/120
	I0420 01:18:37.194120  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 17/120
	I0420 01:18:38.195543  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 18/120
	I0420 01:18:39.196852  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 19/120
	I0420 01:18:40.199122  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 20/120
	I0420 01:18:41.200574  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 21/120
	I0420 01:18:42.201853  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 22/120
	I0420 01:18:43.203173  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 23/120
	I0420 01:18:44.204447  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 24/120
	I0420 01:18:45.206384  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 25/120
	I0420 01:18:46.207764  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 26/120
	I0420 01:18:47.209907  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 27/120
	I0420 01:18:48.211503  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 28/120
	I0420 01:18:49.212805  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 29/120
	I0420 01:18:50.215076  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 30/120
	I0420 01:18:51.216205  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 31/120
	I0420 01:18:52.217577  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 32/120
	I0420 01:18:53.218790  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 33/120
	I0420 01:18:54.220033  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 34/120
	I0420 01:18:55.222069  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 35/120
	I0420 01:18:56.223340  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 36/120
	I0420 01:18:57.224730  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 37/120
	I0420 01:18:58.226131  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 38/120
	I0420 01:18:59.227630  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 39/120
	I0420 01:19:00.229714  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 40/120
	I0420 01:19:01.231748  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 41/120
	I0420 01:19:02.233076  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 42/120
	I0420 01:19:03.234568  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 43/120
	I0420 01:19:04.235813  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 44/120
	I0420 01:19:05.238051  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 45/120
	I0420 01:19:06.239425  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 46/120
	I0420 01:19:07.240797  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 47/120
	I0420 01:19:08.242202  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 48/120
	I0420 01:19:09.243690  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 49/120
	I0420 01:19:10.245953  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 50/120
	I0420 01:19:11.247390  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 51/120
	I0420 01:19:12.248748  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 52/120
	I0420 01:19:13.250135  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 53/120
	I0420 01:19:14.251618  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 54/120
	I0420 01:19:15.253650  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 55/120
	I0420 01:19:16.255042  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 56/120
	I0420 01:19:17.256390  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 57/120
	I0420 01:19:18.257836  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 58/120
	I0420 01:19:19.259020  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 59/120
	I0420 01:19:20.261432  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 60/120
	I0420 01:19:21.262764  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 61/120
	I0420 01:19:22.264051  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 62/120
	I0420 01:19:23.265323  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 63/120
	I0420 01:19:24.266621  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 64/120
	I0420 01:19:25.268494  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 65/120
	I0420 01:19:26.270197  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 66/120
	I0420 01:19:27.271642  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 67/120
	I0420 01:19:28.272971  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 68/120
	I0420 01:19:29.274330  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 69/120
	I0420 01:19:30.276483  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 70/120
	I0420 01:19:31.278046  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 71/120
	I0420 01:19:32.279251  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 72/120
	I0420 01:19:33.280763  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 73/120
	I0420 01:19:34.281992  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 74/120
	I0420 01:19:35.284002  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 75/120
	I0420 01:19:36.285365  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 76/120
	I0420 01:19:37.286829  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 77/120
	I0420 01:19:38.288936  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 78/120
	I0420 01:19:39.290274  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 79/120
	I0420 01:19:40.292910  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 80/120
	I0420 01:19:41.294317  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 81/120
	I0420 01:19:42.295763  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 82/120
	I0420 01:19:43.297381  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 83/120
	I0420 01:19:44.298755  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 84/120
	I0420 01:19:45.300777  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 85/120
	I0420 01:19:46.302262  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 86/120
	I0420 01:19:47.303630  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 87/120
	I0420 01:19:48.305088  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 88/120
	I0420 01:19:49.306558  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 89/120
	I0420 01:19:50.308661  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 90/120
	I0420 01:19:51.310255  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 91/120
	I0420 01:19:52.311435  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 92/120
	I0420 01:19:53.312922  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 93/120
	I0420 01:19:54.314111  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 94/120
	I0420 01:19:55.315813  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 95/120
	I0420 01:19:56.317203  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 96/120
	I0420 01:19:57.318645  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 97/120
	I0420 01:19:58.320025  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 98/120
	I0420 01:19:59.321427  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 99/120
	I0420 01:20:00.323792  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 100/120
	I0420 01:20:01.325301  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 101/120
	I0420 01:20:02.326763  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 102/120
	I0420 01:20:03.328303  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 103/120
	I0420 01:20:04.329982  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 104/120
	I0420 01:20:05.332109  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 105/120
	I0420 01:20:06.333588  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 106/120
	I0420 01:20:07.335191  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 107/120
	I0420 01:20:08.336846  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 108/120
	I0420 01:20:09.338333  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 109/120
	I0420 01:20:10.340546  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 110/120
	I0420 01:20:11.341844  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 111/120
	I0420 01:20:12.343200  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 112/120
	I0420 01:20:13.344094  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 113/120
	I0420 01:20:14.345674  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 114/120
	I0420 01:20:15.347603  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 115/120
	I0420 01:20:16.349036  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 116/120
	I0420 01:20:17.350425  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 117/120
	I0420 01:20:18.351776  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 118/120
	I0420 01:20:19.353171  140955 main.go:141] libmachine: (embed-certs-269507) Waiting for machine to stop 119/120
	I0420 01:20:20.354209  140955 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0420 01:20:20.354271  140955 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0420 01:20:20.356293  140955 out.go:177] 
	W0420 01:20:20.357683  140955 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0420 01:20:20.357707  140955 out.go:239] * 
	* 
	W0420 01:20:20.361489  140955 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:20:20.362806  140955 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-269507 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-269507 -n embed-certs-269507
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-269507 -n embed-certs-269507: exit status 3 (18.493525011s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:20:38.857611  141669 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	E0420 01:20:38.857634  141669 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-269507" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-564860 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-564860 create -f testdata/busybox.yaml: exit status 1 (43.918273ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-564860" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-564860 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 6 (226.260822ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:19:55.603052  141421 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-564860" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 6 (224.611231ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:19:55.827849  141451 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-564860" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
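Both post-mortem status calls above point at the same root cause: the profile's context and endpoint were never written to the kubeconfig, so `kubectl --context old-k8s-version-564860` has nothing to resolve. Purely as an illustration (not what the test harness does), a context lookup with client-go's clientcmd reproduces the "does not exist" check:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig from the default locations ($KUBECONFIG or ~/.kube/config).
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		// Context name taken from the failing test; hypothetical lookup only.
		name := "old-k8s-version-564860"
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q does not exist\n", name)
			os.Exit(1)
		}
		fmt.Printf("context %q found\n", name)
	}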

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-564860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0420 01:19:57.056913   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-564860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.510898544s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-564860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-564860 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-564860 describe deploy/metrics-server -n kube-system: exit status 1 (44.051442ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-564860" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-564860 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 6 (227.585264ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:21:31.611088  142288 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-564860" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.78s)
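The enable command above passes per-addon overrides as NAME=VALUE pairs (`--images=MetricsServer=registry.k8s.io/echoserver:1.4`, `--registries=MetricsServer=fake.domain`), and the assertion at start_stop_delete_test.go:221 expects the resulting deployment to reference fake.domain/registry.k8s.io/echoserver:1.4. A hedged sketch of parsing that flag shape (not minikube's actual parser):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseOverrides splits a comma-separated list of NAME=VALUE pairs, the
	// flag shape used by `addons enable --images=... --registries=...` above.
	func parseOverrides(flag string) map[string]string {
		out := map[string]string{}
		for _, pair := range strings.Split(flag, ",") {
			if kv := strings.SplitN(pair, "=", 2); len(kv) == 2 {
				out[kv[0]] = kv[1]
			}
		}
		return out
	}

	func main() {
		images := parseOverrides("MetricsServer=registry.k8s.io/echoserver:1.4")
		registries := parseOverrides("MetricsServer=fake.domain")
		// Joining registry and image overrides yields the reference the test
		// expects to find in the deployment.
		fmt.Println(registries["MetricsServer"] + "/" + images["MetricsServer"])
	}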

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-338118 -n no-preload-338118
E0420 01:20:16.398411   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-338118 -n no-preload-338118: exit status 3 (3.167727417s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:20:16.425678  141589 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.89:22: connect: no route to host
	E0420 01:20:16.425696  141589 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.89:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-338118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0420 01:20:17.537952   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-338118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153594077s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.89:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-338118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-338118 -n no-preload-338118
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-338118 -n no-preload-338118: exit status 3 (3.062196781s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:20:25.641690  141699 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.89:22: connect: no route to host
	E0420 01:20:25.641713  141699 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.89:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-338118" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
E0420 01:20:27.814536   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988: exit status 3 (3.19962761s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:20:29.769666  141787 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0420 01:20:29.769688  141787 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-907988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-907988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154222639s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-907988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988: exit status 3 (3.061629182s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:20:38.985609  141867 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0420 01:20:38.985624  141867 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-907988" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-269507 -n embed-certs-269507
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-269507 -n embed-certs-269507: exit status 3 (3.168124947s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:20:42.025662  141897 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	E0420 01:20:42.025685  141897 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-269507 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0420 01:20:47.874339   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:47.879649   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:47.889949   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:47.910228   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:47.950497   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:48.030834   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-269507 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153976431s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-269507 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-269507 -n embed-certs-269507
E0420 01:20:48.191960   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:48.512544   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:49.153506   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:50.434526   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-269507 -n embed-certs-269507: exit status 3 (3.061631312s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:20:51.241684  142011 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	E0420 01:20:51.241708  142011 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-269507" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (770.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-564860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0420 01:21:33.749001   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:21:38.319226   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:21:53.180711   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:54.410304   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:21:57.792716   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
E0420 01:22:09.799331   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:22:20.418966   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:22:22.185662   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:22:34.141209   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:23:11.657374   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 01:23:31.719542   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:23:49.906214   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:23:54.474438   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:23:56.061473   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:24:17.589918   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:24:22.159750   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:24:34.707290   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 01:24:36.575702   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:25:04.260027   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:25:27.815282   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 01:25:47.873681   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:26:12.218601   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:26:15.560610   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:26:30.106718   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
E0420 01:26:39.902537   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:26:54.410580   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:28:11.657523   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 01:28:49.905559   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:28:54.474459   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
E0420 01:29:36.575526   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:30:27.815533   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-564860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m47.35032663s)

                                                
                                                
-- stdout --
	* [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-564860" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:21:33.400343  142411 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:21:33.400444  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400452  142411 out.go:304] Setting ErrFile to fd 2...
	I0420 01:21:33.400464  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400681  142411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:21:33.401213  142411 out.go:298] Setting JSON to false
	I0420 01:21:33.402151  142411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14640,"bootTime":1713561453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:21:33.402214  142411 start.go:139] virtualization: kvm guest
	I0420 01:21:33.404200  142411 out.go:177] * [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:21:33.405933  142411 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:21:33.407240  142411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:21:33.405946  142411 notify.go:220] Checking for updates...
	I0420 01:21:33.408693  142411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:21:33.409906  142411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:21:33.411155  142411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:21:33.412528  142411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:21:33.414062  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:21:33.414460  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.414524  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.428987  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0420 01:21:33.429348  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.429850  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.429873  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.430178  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.430370  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.431825  142411 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0420 01:21:33.432895  142411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:21:33.433209  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.433251  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.447157  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0420 01:21:33.447543  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.448080  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.448123  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.448444  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.448609  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.481664  142411 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:21:33.482784  142411 start.go:297] selected driver: kvm2
	I0420 01:21:33.482796  142411 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.482903  142411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:21:33.483572  142411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.483646  142411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:21:33.497421  142411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:21:33.497790  142411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:21:33.497854  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:21:33.497869  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:21:33.497915  142411 start.go:340] cluster config:
	{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.498027  142411 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.499624  142411 out.go:177] * Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	I0420 01:21:33.500874  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:21:33.500901  142411 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:21:33.500914  142411 cache.go:56] Caching tarball of preloaded images
	I0420 01:21:33.500992  142411 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:21:33.501007  142411 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 01:21:33.501116  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:21:33.501613  142411 start.go:360] acquireMachinesLock for old-k8s-version-564860: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:25:45.122924  142411 start.go:364] duration metric: took 4m11.621269498s to acquireMachinesLock for "old-k8s-version-564860"
	I0420 01:25:45.122996  142411 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:45.123018  142411 fix.go:54] fixHost starting: 
	I0420 01:25:45.123538  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:45.123581  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:45.141340  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0420 01:25:45.141873  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:45.142555  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:25:45.142592  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:45.142979  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:45.143234  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:25:45.143426  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetState
	I0420 01:25:45.145067  142411 fix.go:112] recreateIfNeeded on old-k8s-version-564860: state=Stopped err=<nil>
	I0420 01:25:45.145114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	W0420 01:25:45.145289  142411 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:45.147498  142411 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-564860" ...
	I0420 01:25:45.148885  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .Start
	I0420 01:25:45.149115  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring networks are active...
	I0420 01:25:45.149856  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network default is active
	I0420 01:25:45.150205  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network mk-old-k8s-version-564860 is active
	I0420 01:25:45.150615  142411 main.go:141] libmachine: (old-k8s-version-564860) Getting domain xml...
	I0420 01:25:45.151296  142411 main.go:141] libmachine: (old-k8s-version-564860) Creating domain...
	I0420 01:25:46.465532  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting to get IP...
	I0420 01:25:46.466816  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.467306  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.467383  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.467288  143434 retry.go:31] will retry after 265.980653ms: waiting for machine to come up
	I0420 01:25:46.735144  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.735676  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.735700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.735627  143434 retry.go:31] will retry after 254.534112ms: waiting for machine to come up
	I0420 01:25:46.992222  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.992707  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.992738  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.992621  143434 retry.go:31] will retry after 434.179962ms: waiting for machine to come up
	I0420 01:25:47.428397  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.428949  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.428987  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.428899  143434 retry.go:31] will retry after 533.143168ms: waiting for machine to come up
	I0420 01:25:47.963467  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.964008  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.964035  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.963957  143434 retry.go:31] will retry after 601.536298ms: waiting for machine to come up
	I0420 01:25:48.567922  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:48.568436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:48.568469  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:48.568387  143434 retry.go:31] will retry after 853.809635ms: waiting for machine to come up
	I0420 01:25:49.423590  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:49.424154  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:49.424178  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:49.424099  143434 retry.go:31] will retry after 1.096859163s: waiting for machine to come up
	I0420 01:25:50.522906  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:50.523406  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:50.523436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:50.523350  143434 retry.go:31] will retry after 983.057252ms: waiting for machine to come up
	I0420 01:25:51.508033  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:51.508557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:51.508596  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:51.508497  143434 retry.go:31] will retry after 1.463876638s: waiting for machine to come up
	I0420 01:25:52.974032  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:52.974508  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:52.974536  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:52.974459  143434 retry.go:31] will retry after 1.859889372s: waiting for machine to come up
	I0420 01:25:54.836137  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:54.836639  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:54.836670  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:54.836584  143434 retry.go:31] will retry after 2.172259495s: waiting for machine to come up
	I0420 01:25:57.011412  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:57.011810  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:57.011840  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:57.011782  143434 retry.go:31] will retry after 2.279304552s: waiting for machine to come up
	I0420 01:25:59.292382  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:59.292905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:59.292939  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:59.292852  143434 retry.go:31] will retry after 4.056028382s: waiting for machine to come up
	I0420 01:26:03.350591  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:03.351022  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:26:03.351047  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:26:03.350978  143434 retry.go:31] will retry after 5.38819739s: waiting for machine to come up
	I0420 01:26:08.743367  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.743867  142411 main.go:141] libmachine: (old-k8s-version-564860) Found IP for machine: 192.168.61.91
	I0420 01:26:08.743896  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserving static IP address...
	I0420 01:26:08.743914  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has current primary IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.744294  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.744324  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserved static IP address: 192.168.61.91
	I0420 01:26:08.744344  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | skip adding static IP to network mk-old-k8s-version-564860 - found existing host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"}
	I0420 01:26:08.744368  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Getting to WaitForSSH function...
	I0420 01:26:08.744387  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting for SSH to be available...
	I0420 01:26:08.746714  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747119  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.747155  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747278  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH client type: external
	I0420 01:26:08.747314  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa (-rw-------)
	I0420 01:26:08.747346  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:08.747359  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | About to run SSH command:
	I0420 01:26:08.747373  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | exit 0
	I0420 01:26:08.877633  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:08.878016  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:26:08.878715  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:08.881556  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.881982  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.882028  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.882326  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:26:08.882586  142411 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:08.882613  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:08.882853  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:08.885133  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885479  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.885510  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885647  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:08.885843  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886029  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886192  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:08.886403  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:08.886642  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:08.886657  142411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:09.006625  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:09.006655  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.006914  142411 buildroot.go:166] provisioning hostname "old-k8s-version-564860"
	I0420 01:26:09.006940  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.007144  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.010016  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010349  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.010374  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010597  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.010841  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011040  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011235  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.011439  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.011682  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.011718  142411 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-564860 && echo "old-k8s-version-564860" | sudo tee /etc/hostname
	I0420 01:26:09.155581  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-564860
	
	I0420 01:26:09.155612  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.158583  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159021  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.159068  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159285  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.159519  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159747  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159933  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.160128  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.160362  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.160390  142411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-564860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-564860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-564860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:09.288804  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
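The shell snippet above keeps /etc/hosts idempotent: it rewrites an existing 127.0.1.1 entry for the new hostname, or appends one, but only when the hostname is not already mapped. A minimal Go sketch of the same logic (illustrative only, not minikube's provisioner code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic above: if no line already maps the
// hostname, either rewrite an existing 127.0.1.1 entry or append a new one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(before, "old-k8s-version-564860"))
}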
	I0420 01:26:09.288834  142411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:09.288856  142411 buildroot.go:174] setting up certificates
	I0420 01:26:09.288867  142411 provision.go:84] configureAuth start
	I0420 01:26:09.288877  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.289286  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:09.292454  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.292884  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.292923  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.293076  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.295234  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295537  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.295565  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295675  142411 provision.go:143] copyHostCerts
	I0420 01:26:09.295747  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:09.295758  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:09.295811  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:09.295936  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:09.295951  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:09.295981  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:09.296063  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:09.296075  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:09.296095  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:09.296154  142411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-564860 san=[127.0.0.1 192.168.61.91 localhost minikube old-k8s-version-564860]
	I0420 01:26:09.436313  142411 provision.go:177] copyRemoteCerts
	I0420 01:26:09.436373  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:09.436401  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.439316  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.439743  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439856  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.440057  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.440226  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.440360  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:09.529141  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:09.558376  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0420 01:26:09.586393  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:26:09.615274  142411 provision.go:87] duration metric: took 326.393984ms to configureAuth
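provision.go:117 above generates a server certificate whose SANs cover 127.0.0.1, 192.168.61.91, localhost, minikube and the machine name, signed against the cached CA, then copies it to /etc/docker on the guest. A rough Go approximation of the certificate part (self-signed here for brevity; the CA-signing step, key persistence and validity period are simplified assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert sketches "generating server cert ... san=[...]": a certificate
// whose SANs cover the VM IP, loopback and hostnames.
func newServerCert(cn string, ips []net.IP, dnsNames []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: cn, Organization: []string{"jenkins.old-k8s-version-564860"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dnsNames,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	cert, err := newServerCert("minikube",
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.91")},
		[]string{"localhost", "minikube", "old-k8s-version-564860"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d bytes of PEM\n", len(cert))
}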
	I0420 01:26:09.615300  142411 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:09.615501  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:26:09.615590  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.618470  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.618905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.618938  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.619141  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.619325  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619505  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619662  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.619862  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.620073  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.620091  142411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:09.924929  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:09.924958  142411 machine.go:97] duration metric: took 1.042352034s to provisionDockerMachine
	I0420 01:26:09.924973  142411 start.go:293] postStartSetup for "old-k8s-version-564860" (driver="kvm2")
	I0420 01:26:09.924985  142411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:09.925021  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:09.925441  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:09.925485  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.927985  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928377  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.928407  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928565  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.928770  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.928944  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.929114  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.020189  142411 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:10.025578  142411 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:10.025607  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:10.025707  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:10.025795  142411 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:10.025888  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:10.038138  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:10.065063  142411 start.go:296] duration metric: took 140.07164ms for postStartSetup
	I0420 01:26:10.065111  142411 fix.go:56] duration metric: took 24.94209431s for fixHost
	I0420 01:26:10.065139  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.068099  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068493  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.068544  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068697  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.068916  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069255  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.069455  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:10.069662  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:10.069678  142411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0420 01:26:10.190955  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576370.174630368
	
	I0420 01:26:10.190984  142411 fix.go:216] guest clock: 1713576370.174630368
	I0420 01:26:10.190994  142411 fix.go:229] Guest: 2024-04-20 01:26:10.174630368 +0000 UTC Remote: 2024-04-20 01:26:10.065116719 +0000 UTC m=+276.709087933 (delta=109.513649ms)
	I0420 01:26:10.191036  142411 fix.go:200] guest clock delta is within tolerance: 109.513649ms
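The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~110ms skew. A small sketch of that parsing and delta computation (the two-second tolerance is an assumption for illustration, not necessarily minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseDateOutput turns the "seconds.nanoseconds" string printed by
// `date +%s.%N` into a time.Time.
func parseDateOutput(s string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if nsecStr != "" {
		if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseDateOutput("1713576370.174630368") // value seen in the log above
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, time.April, 20, 1, 26, 10, 65116719, time.UTC) // host-side reading
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed tolerance, for illustration
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
}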
	I0420 01:26:10.191044  142411 start.go:83] releasing machines lock for "old-k8s-version-564860", held for 25.068071712s
	I0420 01:26:10.191074  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.191368  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:10.194872  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195333  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.195365  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195510  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196060  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196253  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196331  142411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:10.196375  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.196439  142411 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:10.196467  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.199156  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199522  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.199572  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199760  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.199975  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200098  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.200137  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.200165  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.200326  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.200700  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.200857  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200992  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.201150  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.283430  142411 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:10.310703  142411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:10.462457  142411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:10.470897  142411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:10.470993  142411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:10.489867  142411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
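cni.go above sidelines conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix so CRI-O does not pick them up. A simplified equivalent of that rename pass (directory and matching rules are illustrative, not minikube's exact find expression):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI config files so the container
// runtime ignores them, mirroring the `find ... -exec mv {} {}.mk_disabled` step.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", disabled)
}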
	I0420 01:26:10.489899  142411 start.go:494] detecting cgroup driver to use...
	I0420 01:26:10.489996  142411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:10.512741  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:10.530013  142411 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:10.530077  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:10.548567  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:10.565645  142411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:10.693390  142411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:10.878889  142411 docker.go:233] disabling docker service ...
	I0420 01:26:10.878973  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:10.901233  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:10.915219  142411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:11.053815  142411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:11.201766  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:11.218569  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:11.240543  142411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0420 01:26:11.240604  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.253384  142411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:11.253460  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.268703  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.281575  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.296477  142411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:11.312458  142411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:11.328008  142411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:11.328076  142411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:11.349027  142411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:11.362064  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:11.500624  142411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:11.665985  142411 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:11.666061  142411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:11.672929  142411 start.go:562] Will wait 60s for crictl version
	I0420 01:26:11.673006  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:11.678398  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:11.727572  142411 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:11.727663  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.760504  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.803463  142411 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0420 01:26:11.804782  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:11.807755  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808135  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:11.808177  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808396  142411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:11.813653  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:11.830618  142411 kubeadm.go:877] updating cluster {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:11.830793  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:26:11.830874  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:11.889149  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
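crio.go:510 decides whether the expected images are already present by inspecting `sudo crictl images --output json`. A toy check along those lines; the JSON field names ("images", "repoTags") are assumptions about crictl's output shape, not verified against this CRI-O version:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the assumed shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the dump lists the given tag.
func hasImage(crictlJSON []byte, tag string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(crictlJSON, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver:v1.20.0 preloaded:", ok) // false, as the log concludes
}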
	I0420 01:26:11.889218  142411 ssh_runner.go:195] Run: which lz4
	I0420 01:26:11.894461  142411 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:26:11.900427  142411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:26:11.900456  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0420 01:26:14.031960  142411 crio.go:462] duration metric: took 2.137532848s to copy over tarball
	I0420 01:26:14.032043  142411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:26:17.581625  142411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.549548059s)
	I0420 01:26:17.581660  142411 crio.go:469] duration metric: took 3.549666471s to extract the tarball
	I0420 01:26:17.581672  142411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:26:17.633172  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:17.679514  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:17.679544  142411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:17.679710  142411 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.679940  142411 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.680051  142411 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.680061  142411 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.680225  142411 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.680266  142411 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0420 01:26:17.680442  142411 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.680516  142411 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.682336  142411 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.682425  142411 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0420 01:26:17.682428  142411 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.682462  142411 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.682341  142411 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.682512  142411 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.682952  142411 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.682955  142411 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.846602  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.850673  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.866812  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.871983  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.876346  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0420 01:26:17.876745  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.881269  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.985788  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.997662  142411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0420 01:26:17.997709  142411 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0420 01:26:17.997716  142411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.997751  142411 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.997778  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:17.997797  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071610  142411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0420 01:26:18.071682  142411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.071705  142411 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0420 01:26:18.071741  142411 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.071760  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071793  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.085631  142411 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0420 01:26:18.085689  142411 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0420 01:26:18.085748  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.087239  142411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0420 01:26:18.087288  142411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.087362  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.094891  142411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0420 01:26:18.094940  142411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:18.094989  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.232524  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.232613  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0420 01:26:18.232649  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.232682  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.232710  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:18.408724  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0420 01:26:18.408791  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0420 01:26:18.410041  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0420 01:26:18.410136  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0420 01:26:18.424042  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0420 01:26:18.428203  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0420 01:26:18.428295  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0420 01:26:18.450170  142411 cache_images.go:92] duration metric: took 770.600266ms to LoadCachedImages
	W0420 01:26:18.450288  142411 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0420 01:26:18.450305  142411 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.20.0 crio true true} ...
	I0420 01:26:18.450428  142411 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-564860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:18.450522  142411 ssh_runner.go:195] Run: crio config
	I0420 01:26:18.503362  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:26:18.503407  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:18.503427  142411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:18.503463  142411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-564860 NodeName:old-k8s-version-564860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0420 01:26:18.503671  142411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-564860"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:18.503745  142411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0420 01:26:18.516393  142411 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:18.516475  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:18.529038  142411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0420 01:26:18.550442  142411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:18.572012  142411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
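The kubeadm.yaml written above (kubeadm.go:187) is rendered from the kubeadm options struct logged earlier. A toy text/template rendering of a trimmed-down ClusterConfiguration fragment (the template text is a stand-in for illustration, not minikube's real template):

package main

import (
	"os"
	"text/template"
)

// A stripped-down stand-in for the template that yields the config dump above.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := struct {
		APIServerPort     int
		KubernetesVersion string
		DNSDomain         string
		PodSubnet         string
		ServiceCIDR       string
	}{8443, "v1.20.0", "cluster.local", "10.244.0.0/16", "10.96.0.0/12"}
	tmpl := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}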
	I0420 01:26:18.595682  142411 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:18.602036  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:18.622226  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:18.774466  142411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:18.795074  142411 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860 for IP: 192.168.61.91
	I0420 01:26:18.795104  142411 certs.go:194] generating shared ca certs ...
	I0420 01:26:18.795125  142411 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:18.795301  142411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:18.795342  142411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:18.795352  142411 certs.go:256] generating profile certs ...
	I0420 01:26:18.795433  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key
	I0420 01:26:18.795487  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f
	I0420 01:26:18.795524  142411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key
	I0420 01:26:18.795645  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:18.795675  142411 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:18.795685  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:18.795706  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:18.795735  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:18.795765  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:18.795828  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:18.796607  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:18.845581  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:18.891065  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:18.933536  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:18.977381  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:26:19.009816  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:19.042053  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:19.090614  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:26:19.119554  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:19.147545  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:19.177775  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:19.211008  142411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:19.234399  142411 ssh_runner.go:195] Run: openssl version
	I0420 01:26:19.242808  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:19.256132  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261681  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261739  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.270546  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:19.284112  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:19.296998  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302497  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302551  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.310883  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:19.325130  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:19.338964  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344915  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344986  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.351926  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
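The openssl calls above expose each CA certificate under /etc/ssl/certs/<subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem) so OpenSSL-style trust lookups can find it. A minimal Go helper doing the same hash-and-symlink dance (paths are illustrative and writing /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the certificate's subject hash with openssl and exposes
// the cert under <certsDir>/<hash>.0, as the log's test -L || ln -fs does.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // link already exists
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}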
	I0420 01:26:19.366428  142411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:19.372391  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:19.379606  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:19.386698  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:19.395102  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:19.401981  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:19.409477  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 01:26:19.416444  142411 kubeadm.go:391] StartCluster: {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:19.416557  142411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:19.416600  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.460782  142411 cri.go:89] found id: ""
	I0420 01:26:19.460884  142411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:19.473812  142411 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:19.473832  142411 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:19.473838  142411 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:19.473899  142411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:19.486686  142411 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:19.487757  142411 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:26:19.488411  142411 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-76456/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-564860" cluster setting kubeconfig missing "old-k8s-version-564860" context setting]
	I0420 01:26:19.489438  142411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:19.491237  142411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:19.503483  142411 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0420 01:26:19.503519  142411 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:19.503530  142411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:19.503597  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.546350  142411 cri.go:89] found id: ""
	I0420 01:26:19.546438  142411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:19.568177  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:19.580545  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:19.580573  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:19.580658  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:19.592945  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:19.593010  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:19.605598  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:19.617261  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:19.617346  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:19.629242  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.640143  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:19.640211  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.654226  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:19.666207  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:19.666275  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:19.678899  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:19.694374  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:19.845435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.619142  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.891265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.020834  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.124545  142411 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:21.124652  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:21.625462  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.125171  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:23.125077  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:23.625392  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.125446  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.625035  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.624718  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.625420  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.125162  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.625475  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:28.125637  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:28.625781  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.125145  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.625647  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.125081  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.625404  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.124753  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.124750  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.624841  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:33.125120  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:33.625596  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.124972  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.624791  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.125630  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.624815  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.125677  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.625631  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.624883  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:38.124924  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:38.624766  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.125330  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.624953  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.125409  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.625125  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.125460  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.625041  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.125103  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:43.125237  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:43.625155  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.124986  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.125834  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.625359  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.125706  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.625115  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.125204  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.625746  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:48.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:48.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.125441  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.625078  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.124787  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.624817  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.125211  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.625408  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.124903  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.624826  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:53.124728  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:53.625614  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.125487  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.625414  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.125150  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.624831  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.125438  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.625450  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.125591  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.625757  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:58.124963  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:58.625549  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.125177  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.624704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.125709  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.625346  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.124849  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.624947  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.125407  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.625704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:03.125695  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:03.625423  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.124806  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.625232  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.124917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.624983  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.124851  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.625029  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.125554  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.625163  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:08.125455  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:08.625100  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.125395  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.625454  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.125615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.624892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.125366  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.625074  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.125165  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.625629  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:13.124824  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:13.625040  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.125511  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.624890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.125622  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.625393  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.125215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.625561  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.125263  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.624772  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:18.125597  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:18.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.124956  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.625579  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.124827  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.625212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:21.125476  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:21.125553  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:21.174633  142411 cri.go:89] found id: ""
	I0420 01:27:21.174668  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.174679  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:21.174686  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:21.174767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:21.218230  142411 cri.go:89] found id: ""
	I0420 01:27:21.218263  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.218275  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:21.218284  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:21.218369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:21.258886  142411 cri.go:89] found id: ""
	I0420 01:27:21.258916  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.258926  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:21.258932  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:21.259003  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:21.306725  142411 cri.go:89] found id: ""
	I0420 01:27:21.306758  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.306769  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:21.306777  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:21.306843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:21.349049  142411 cri.go:89] found id: ""
	I0420 01:27:21.349086  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.349098  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:21.349106  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:21.349174  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:21.392312  142411 cri.go:89] found id: ""
	I0420 01:27:21.392338  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.392346  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:21.392352  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:21.392425  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:21.434121  142411 cri.go:89] found id: ""
	I0420 01:27:21.434148  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.434156  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:21.434162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:21.434210  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:21.473728  142411 cri.go:89] found id: ""
	I0420 01:27:21.473754  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.473762  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:21.473772  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:21.473785  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:21.537607  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:21.537648  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:21.554563  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:21.554604  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:21.674778  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:21.674803  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:21.674829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:21.740625  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:21.740666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:24.284890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:24.301486  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:24.301571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:24.340987  142411 cri.go:89] found id: ""
	I0420 01:27:24.341012  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.341021  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:24.341026  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:24.341102  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:24.379983  142411 cri.go:89] found id: ""
	I0420 01:27:24.380014  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.380024  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:24.380029  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:24.380113  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:24.438700  142411 cri.go:89] found id: ""
	I0420 01:27:24.438729  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.438739  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:24.438745  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:24.438795  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:24.487761  142411 cri.go:89] found id: ""
	I0420 01:27:24.487793  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.487802  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:24.487808  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:24.487870  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:24.529408  142411 cri.go:89] found id: ""
	I0420 01:27:24.529439  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.529448  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:24.529453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:24.529523  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:24.572782  142411 cri.go:89] found id: ""
	I0420 01:27:24.572817  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.572831  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:24.572841  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:24.572910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:24.620651  142411 cri.go:89] found id: ""
	I0420 01:27:24.620684  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.620696  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:24.620704  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:24.620769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:24.659481  142411 cri.go:89] found id: ""
	I0420 01:27:24.659513  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.659525  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:24.659537  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:24.659552  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.714483  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:24.714517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:24.730279  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:24.730316  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:24.804883  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:24.804909  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:24.804926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:24.879557  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:24.879602  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:27.431026  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:27.448112  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:27.448176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:27.494959  142411 cri.go:89] found id: ""
	I0420 01:27:27.494988  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.494999  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:27.495007  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:27.495075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:27.532023  142411 cri.go:89] found id: ""
	I0420 01:27:27.532055  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.532066  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:27.532075  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:27.532151  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:27.578551  142411 cri.go:89] found id: ""
	I0420 01:27:27.578600  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.578613  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:27.578621  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:27.578692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:27.618248  142411 cri.go:89] found id: ""
	I0420 01:27:27.618277  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.618288  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:27.618296  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:27.618363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:27.655682  142411 cri.go:89] found id: ""
	I0420 01:27:27.655714  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.655723  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:27.655729  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:27.655787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:27.696355  142411 cri.go:89] found id: ""
	I0420 01:27:27.696389  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.696400  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:27.696408  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:27.696478  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:27.735354  142411 cri.go:89] found id: ""
	I0420 01:27:27.735378  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.735396  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:27.735402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:27.735460  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:27.775234  142411 cri.go:89] found id: ""
	I0420 01:27:27.775261  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.775269  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:27.775277  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:27.775294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:27.789970  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:27.790005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:27.873345  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:27.873371  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:27.873387  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:27.952309  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:27.952353  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:28.003746  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:28.003792  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:30.555691  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:30.570962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:30.571041  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:30.613185  142411 cri.go:89] found id: ""
	I0420 01:27:30.613218  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.613227  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:30.613233  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:30.613291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:30.654494  142411 cri.go:89] found id: ""
	I0420 01:27:30.654520  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.654529  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:30.654535  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:30.654600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:30.702605  142411 cri.go:89] found id: ""
	I0420 01:27:30.702634  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.702646  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:30.702653  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:30.702719  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:30.742072  142411 cri.go:89] found id: ""
	I0420 01:27:30.742104  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.742115  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:30.742123  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:30.742191  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:30.793199  142411 cri.go:89] found id: ""
	I0420 01:27:30.793232  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.793244  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:30.793252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:30.793340  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:30.832978  142411 cri.go:89] found id: ""
	I0420 01:27:30.833019  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.833034  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:30.833044  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:30.833126  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:30.875606  142411 cri.go:89] found id: ""
	I0420 01:27:30.875641  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.875655  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:30.875662  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:30.875729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:30.917288  142411 cri.go:89] found id: ""
	I0420 01:27:30.917335  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.917348  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:30.917360  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:30.917375  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:30.996446  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:30.996469  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:30.996485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:31.080494  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:31.080543  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:31.141226  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:31.141260  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:31.212808  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:31.212845  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:33.728927  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:33.745749  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:33.745835  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:33.788813  142411 cri.go:89] found id: ""
	I0420 01:27:33.788845  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.788859  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:33.788868  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:33.788936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:33.834918  142411 cri.go:89] found id: ""
	I0420 01:27:33.834948  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.834957  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:33.834963  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:33.835026  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:33.873928  142411 cri.go:89] found id: ""
	I0420 01:27:33.873960  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.873972  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:33.873977  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:33.874027  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:33.921462  142411 cri.go:89] found id: ""
	I0420 01:27:33.921497  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.921510  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:33.921519  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:33.921606  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:33.962280  142411 cri.go:89] found id: ""
	I0420 01:27:33.962308  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.962320  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:33.962329  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:33.962390  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:34.002582  142411 cri.go:89] found id: ""
	I0420 01:27:34.002616  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.002627  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:34.002635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:34.002707  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:34.047383  142411 cri.go:89] found id: ""
	I0420 01:27:34.047410  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.047421  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:34.047428  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:34.047489  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:34.088296  142411 cri.go:89] found id: ""
	I0420 01:27:34.088341  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.088352  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:34.088364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:34.088381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:34.180338  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:34.180380  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:34.224386  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:34.224422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:34.278451  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:34.278488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:34.294377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:34.294409  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:34.377115  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:36.878000  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:36.896875  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:36.896953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:36.953915  142411 cri.go:89] found id: ""
	I0420 01:27:36.953954  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.953968  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:36.953977  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:36.954056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:36.998223  142411 cri.go:89] found id: ""
	I0420 01:27:36.998250  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.998260  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:36.998268  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:36.998337  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:37.069299  142411 cri.go:89] found id: ""
	I0420 01:27:37.069346  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.069358  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:37.069366  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:37.069436  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:37.112068  142411 cri.go:89] found id: ""
	I0420 01:27:37.112100  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.112112  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:37.112119  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:37.112175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:37.155883  142411 cri.go:89] found id: ""
	I0420 01:27:37.155913  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.155924  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:37.155933  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:37.156006  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:37.200979  142411 cri.go:89] found id: ""
	I0420 01:27:37.201007  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.201018  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:37.201026  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:37.201091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:37.241639  142411 cri.go:89] found id: ""
	I0420 01:27:37.241667  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.241678  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:37.241686  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:37.241748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:37.281845  142411 cri.go:89] found id: ""
	I0420 01:27:37.281883  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.281894  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:37.281907  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:37.281923  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:37.327428  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:37.327463  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:37.385213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:37.385248  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:37.400158  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:37.400190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:37.476662  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:37.476687  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:37.476700  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:40.075888  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:40.091313  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:40.091389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:40.134013  142411 cri.go:89] found id: ""
	I0420 01:27:40.134039  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.134048  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:40.134053  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:40.134136  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:40.182108  142411 cri.go:89] found id: ""
	I0420 01:27:40.182140  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.182151  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:40.182158  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:40.182222  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:40.225406  142411 cri.go:89] found id: ""
	I0420 01:27:40.225438  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.225447  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:40.225453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:40.225539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.267599  142411 cri.go:89] found id: ""
	I0420 01:27:40.267627  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.267636  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:40.267645  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:40.267790  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:40.309385  142411 cri.go:89] found id: ""
	I0420 01:27:40.309418  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.309439  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:40.309448  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:40.309525  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:40.351947  142411 cri.go:89] found id: ""
	I0420 01:27:40.351980  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.351993  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:40.352003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:40.352079  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:40.395583  142411 cri.go:89] found id: ""
	I0420 01:27:40.395614  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.395623  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:40.395629  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:40.395692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:40.441348  142411 cri.go:89] found id: ""
	I0420 01:27:40.441397  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.441412  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:40.441426  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:40.441445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:40.498231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:40.498268  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:40.514550  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:40.514578  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:40.593580  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:40.593614  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:40.593631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:40.671736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:40.671778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:43.224892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:43.240876  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:43.240939  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:43.281583  142411 cri.go:89] found id: ""
	I0420 01:27:43.281621  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.281634  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:43.281643  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:43.281705  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:43.321079  142411 cri.go:89] found id: ""
	I0420 01:27:43.321115  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.321125  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:43.321132  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:43.321277  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:43.365827  142411 cri.go:89] found id: ""
	I0420 01:27:43.365855  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.365864  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:43.365870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:43.365921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:43.404317  142411 cri.go:89] found id: ""
	I0420 01:27:43.404349  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.404361  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:43.404370  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:43.404443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:43.449268  142411 cri.go:89] found id: ""
	I0420 01:27:43.449299  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.449323  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:43.449331  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:43.449408  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:43.487782  142411 cri.go:89] found id: ""
	I0420 01:27:43.487829  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.487837  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:43.487844  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:43.487909  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:43.526650  142411 cri.go:89] found id: ""
	I0420 01:27:43.526677  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.526688  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:43.526695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:43.526755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:43.565288  142411 cri.go:89] found id: ""
	I0420 01:27:43.565328  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.565340  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:43.565352  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:43.565368  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:43.618013  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:43.618046  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:43.634064  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:43.634101  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:43.710633  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:43.710663  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:43.710679  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:43.796658  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:43.796709  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.352329  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:46.366848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:46.366935  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:46.413643  142411 cri.go:89] found id: ""
	I0420 01:27:46.413676  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.413687  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:46.413695  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:46.413762  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:46.457976  142411 cri.go:89] found id: ""
	I0420 01:27:46.458002  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.458011  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:46.458020  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:46.458086  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:46.500291  142411 cri.go:89] found id: ""
	I0420 01:27:46.500317  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.500328  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:46.500334  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:46.500398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:46.541279  142411 cri.go:89] found id: ""
	I0420 01:27:46.541331  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.541343  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:46.541359  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:46.541442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:46.585613  142411 cri.go:89] found id: ""
	I0420 01:27:46.585642  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.585654  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:46.585661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:46.585726  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:46.634400  142411 cri.go:89] found id: ""
	I0420 01:27:46.634430  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.634441  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:46.634450  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:46.634534  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:46.676276  142411 cri.go:89] found id: ""
	I0420 01:27:46.676305  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.676313  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:46.676320  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:46.676380  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:46.719323  142411 cri.go:89] found id: ""
	I0420 01:27:46.719356  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.719369  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:46.719381  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:46.719398  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:46.799735  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:46.799765  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:46.799790  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:46.878323  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:46.878371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.931870  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:46.931902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:46.983217  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:46.983250  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:49.500147  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:49.517380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:49.517461  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:49.561300  142411 cri.go:89] found id: ""
	I0420 01:27:49.561347  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.561358  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:49.561365  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:49.561432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:49.604569  142411 cri.go:89] found id: ""
	I0420 01:27:49.604594  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.604608  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:49.604614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:49.604664  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:49.644952  142411 cri.go:89] found id: ""
	I0420 01:27:49.644983  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.644999  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:49.645006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:49.645071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:49.694719  142411 cri.go:89] found id: ""
	I0420 01:27:49.694749  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.694757  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:49.694764  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:49.694815  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:49.743821  142411 cri.go:89] found id: ""
	I0420 01:27:49.743849  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.743857  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:49.743865  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:49.743936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:49.789125  142411 cri.go:89] found id: ""
	I0420 01:27:49.789152  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.789161  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:49.789167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:49.789233  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:49.828794  142411 cri.go:89] found id: ""
	I0420 01:27:49.828829  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.828841  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:49.828848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:49.828913  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:49.873335  142411 cri.go:89] found id: ""
	I0420 01:27:49.873366  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.873375  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:49.873385  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:49.873397  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.930590  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:49.930632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:49.946850  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:49.946889  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:50.039200  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:50.039220  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:50.039236  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:50.122067  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:50.122118  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:52.664342  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:52.682978  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:52.683061  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:52.733806  142411 cri.go:89] found id: ""
	I0420 01:27:52.733836  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.733848  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:52.733855  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:52.733921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:52.785977  142411 cri.go:89] found id: ""
	I0420 01:27:52.786008  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.786020  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:52.786027  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:52.786092  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:52.826957  142411 cri.go:89] found id: ""
	I0420 01:27:52.826987  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.826995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:52.827001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:52.827056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:52.876208  142411 cri.go:89] found id: ""
	I0420 01:27:52.876251  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.876265  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:52.876276  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:52.876354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:52.918629  142411 cri.go:89] found id: ""
	I0420 01:27:52.918666  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.918679  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:52.918687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:52.918767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:52.967604  142411 cri.go:89] found id: ""
	I0420 01:27:52.967646  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.967655  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:52.967661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:52.967729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:53.010948  142411 cri.go:89] found id: ""
	I0420 01:27:53.010975  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.010983  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:53.010988  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:53.011039  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:53.055569  142411 cri.go:89] found id: ""
	I0420 01:27:53.055594  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.055611  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:53.055620  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:53.055633  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:53.071038  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:53.071067  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:53.151334  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:53.151364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:53.151381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:53.238509  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:53.238553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:53.284898  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:53.284945  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:55.843065  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:55.856928  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:55.857001  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:55.903058  142411 cri.go:89] found id: ""
	I0420 01:27:55.903092  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.903103  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:55.903111  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:55.903170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:55.944369  142411 cri.go:89] found id: ""
	I0420 01:27:55.944402  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.944414  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:55.944421  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:55.944474  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:55.983485  142411 cri.go:89] found id: ""
	I0420 01:27:55.983510  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.983517  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:55.983523  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:55.983571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:56.021931  142411 cri.go:89] found id: ""
	I0420 01:27:56.021956  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.021964  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:56.021970  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:56.022019  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:56.066671  142411 cri.go:89] found id: ""
	I0420 01:27:56.066705  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.066717  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:56.066724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:56.066788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:56.107724  142411 cri.go:89] found id: ""
	I0420 01:27:56.107783  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.107794  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:56.107800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:56.107854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:56.149201  142411 cri.go:89] found id: ""
	I0420 01:27:56.149234  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.149246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:56.149255  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:56.149328  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:56.189580  142411 cri.go:89] found id: ""
	I0420 01:27:56.189621  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.189633  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:56.189645  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:56.189661  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:56.243425  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:56.243462  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:56.261043  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:56.261079  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:56.341944  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:56.341967  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:56.341980  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:56.423252  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:56.423294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:58.968894  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:58.984559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:58.984648  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:59.021603  142411 cri.go:89] found id: ""
	I0420 01:27:59.021634  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.021655  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:59.021666  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:59.021756  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:59.061592  142411 cri.go:89] found id: ""
	I0420 01:27:59.061626  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.061642  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:59.061649  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:59.061701  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:59.101956  142411 cri.go:89] found id: ""
	I0420 01:27:59.101986  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.101996  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:59.102003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:59.102072  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:59.141104  142411 cri.go:89] found id: ""
	I0420 01:27:59.141136  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.141145  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:59.141151  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:59.141221  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:59.188973  142411 cri.go:89] found id: ""
	I0420 01:27:59.189005  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.189014  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:59.189022  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:59.189107  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:59.232598  142411 cri.go:89] found id: ""
	I0420 01:27:59.232632  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.232641  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:59.232647  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:59.232704  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:59.272623  142411 cri.go:89] found id: ""
	I0420 01:27:59.272660  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.272669  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:59.272675  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:59.272739  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:59.309951  142411 cri.go:89] found id: ""
	I0420 01:27:59.309977  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.309984  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:59.309994  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:59.310005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:59.366589  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:59.366626  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:59.382724  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:59.382756  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:59.461072  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:59.461102  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:59.461122  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:59.544736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:59.544769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:02.089118  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:02.105402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:02.105483  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:02.144665  142411 cri.go:89] found id: ""
	I0420 01:28:02.144691  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.144700  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:02.144706  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:02.144759  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:02.187471  142411 cri.go:89] found id: ""
	I0420 01:28:02.187498  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.187508  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:02.187515  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:02.187576  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:02.229206  142411 cri.go:89] found id: ""
	I0420 01:28:02.229233  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.229241  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:02.229247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:02.229335  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:02.279425  142411 cri.go:89] found id: ""
	I0420 01:28:02.279464  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.279478  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:02.279488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:02.279577  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:02.323033  142411 cri.go:89] found id: ""
	I0420 01:28:02.323066  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.323082  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:02.323090  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:02.323155  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:02.360121  142411 cri.go:89] found id: ""
	I0420 01:28:02.360158  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.360170  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:02.360178  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:02.360244  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:02.398756  142411 cri.go:89] found id: ""
	I0420 01:28:02.398786  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.398797  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:02.398804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:02.398867  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:02.437982  142411 cri.go:89] found id: ""
	I0420 01:28:02.438010  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.438018  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:02.438028  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:02.438041  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:02.489396  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:02.489434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:02.506764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:02.506796  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:02.591894  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:02.591915  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:02.591929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:02.675241  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:02.675281  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:05.224296  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.238522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:05.238593  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:05.278495  142411 cri.go:89] found id: ""
	I0420 01:28:05.278529  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.278540  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:05.278549  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:05.278621  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:05.318096  142411 cri.go:89] found id: ""
	I0420 01:28:05.318122  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.318130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:05.318136  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:05.318196  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:05.358607  142411 cri.go:89] found id: ""
	I0420 01:28:05.358636  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.358653  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:05.358658  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:05.358749  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:05.417163  142411 cri.go:89] found id: ""
	I0420 01:28:05.417199  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.417211  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:05.417218  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:05.417284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:05.468566  142411 cri.go:89] found id: ""
	I0420 01:28:05.468599  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.468610  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:05.468619  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:05.468691  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:05.514005  142411 cri.go:89] found id: ""
	I0420 01:28:05.514037  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.514047  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:05.514055  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:05.514112  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:05.554972  142411 cri.go:89] found id: ""
	I0420 01:28:05.555001  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.555012  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:05.555020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:05.555083  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:05.596736  142411 cri.go:89] found id: ""
	I0420 01:28:05.596764  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.596773  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:05.596787  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:05.596800  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:05.649680  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:05.649719  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:05.667583  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:05.667614  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:05.743886  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:05.743922  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:05.743939  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:05.827827  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:05.827863  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.384615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:08.401190  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:08.403071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:08.445453  142411 cri.go:89] found id: ""
	I0420 01:28:08.445486  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.445497  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:08.445505  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:08.445573  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:08.487598  142411 cri.go:89] found id: ""
	I0420 01:28:08.487636  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.487649  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:08.487657  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:08.487727  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:08.531416  142411 cri.go:89] found id: ""
	I0420 01:28:08.531445  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.531457  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:08.531465  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:08.531526  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:08.574964  142411 cri.go:89] found id: ""
	I0420 01:28:08.575000  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.575012  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:08.575020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:08.575075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:08.612644  142411 cri.go:89] found id: ""
	I0420 01:28:08.612679  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.612688  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:08.612695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:08.612748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:08.651775  142411 cri.go:89] found id: ""
	I0420 01:28:08.651800  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.651811  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:08.651817  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:08.651869  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:08.692869  142411 cri.go:89] found id: ""
	I0420 01:28:08.692894  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.692902  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:08.692908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:08.692957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:08.731765  142411 cri.go:89] found id: ""
	I0420 01:28:08.731794  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.731805  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:08.731817  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:08.731836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:08.747401  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:08.747445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:08.831069  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:08.831091  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:08.831110  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:08.919053  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:08.919095  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.965814  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:08.965854  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.518303  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:11.535213  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:11.535294  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:11.577182  142411 cri.go:89] found id: ""
	I0420 01:28:11.577214  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.577223  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:11.577229  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:11.577289  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:11.615023  142411 cri.go:89] found id: ""
	I0420 01:28:11.615055  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.615064  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:11.615070  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:11.615138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:11.654062  142411 cri.go:89] found id: ""
	I0420 01:28:11.654089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.654097  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:11.654104  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:11.654170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:11.700846  142411 cri.go:89] found id: ""
	I0420 01:28:11.700875  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.700885  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:11.700892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:11.700966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:11.743061  142411 cri.go:89] found id: ""
	I0420 01:28:11.743089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.743100  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:11.743109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:11.743175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:11.783651  142411 cri.go:89] found id: ""
	I0420 01:28:11.783687  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.783698  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:11.783706  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:11.783781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:11.827099  142411 cri.go:89] found id: ""
	I0420 01:28:11.827130  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.827139  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:11.827144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:11.827197  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:11.867476  142411 cri.go:89] found id: ""
	I0420 01:28:11.867510  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.867523  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:11.867535  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:11.867554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.920211  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:11.920246  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:11.937632  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:11.937670  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:12.014917  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:12.014940  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:12.014955  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:12.096549  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:12.096586  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:14.653783  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:14.667893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:14.667955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:14.710098  142411 cri.go:89] found id: ""
	I0420 01:28:14.710153  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.710164  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:14.710172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:14.710240  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:14.750891  142411 cri.go:89] found id: ""
	I0420 01:28:14.750920  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.750929  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:14.750939  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:14.751010  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:14.794062  142411 cri.go:89] found id: ""
	I0420 01:28:14.794103  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.794127  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:14.794135  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:14.794204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:14.834333  142411 cri.go:89] found id: ""
	I0420 01:28:14.834363  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.834375  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:14.834383  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:14.834446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:14.874114  142411 cri.go:89] found id: ""
	I0420 01:28:14.874148  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.874160  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:14.874168  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:14.874238  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:14.912685  142411 cri.go:89] found id: ""
	I0420 01:28:14.912711  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.912720  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:14.912726  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:14.912787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:14.954050  142411 cri.go:89] found id: ""
	I0420 01:28:14.954076  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.954083  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:14.954089  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:14.954150  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:14.992310  142411 cri.go:89] found id: ""
	I0420 01:28:14.992348  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.992357  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:14.992365  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:14.992388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:15.047471  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:15.047512  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:15.065800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:15.065842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:15.146009  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:15.146037  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:15.146058  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:15.232920  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:15.232962  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:17.781215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:17.797404  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:17.797466  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:17.840532  142411 cri.go:89] found id: ""
	I0420 01:28:17.840564  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.840573  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:17.840579  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:17.840636  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:17.881562  142411 cri.go:89] found id: ""
	I0420 01:28:17.881588  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.881596  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:17.881602  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:17.881651  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:17.935068  142411 cri.go:89] found id: ""
	I0420 01:28:17.935098  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.935108  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:17.935115  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:17.935177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:17.980745  142411 cri.go:89] found id: ""
	I0420 01:28:17.980782  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.980795  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:17.980804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:17.980880  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:18.051120  142411 cri.go:89] found id: ""
	I0420 01:28:18.051153  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.051164  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:18.051171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:18.051235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:18.091741  142411 cri.go:89] found id: ""
	I0420 01:28:18.091776  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.091788  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:18.091796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:18.091864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:18.133438  142411 cri.go:89] found id: ""
	I0420 01:28:18.133472  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.133482  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:18.133488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:18.133560  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:18.174624  142411 cri.go:89] found id: ""
	I0420 01:28:18.174665  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.174679  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:18.174694  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:18.174713  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:18.228519  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:18.228563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:18.246452  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:18.246487  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:18.322051  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:18.322074  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:18.322088  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:18.404873  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:18.404904  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:20.950553  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:20.965081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:20.965139  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:21.007198  142411 cri.go:89] found id: ""
	I0420 01:28:21.007243  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.007255  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:21.007263  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:21.007330  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:21.050991  142411 cri.go:89] found id: ""
	I0420 01:28:21.051019  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.051028  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:21.051034  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:21.051104  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:21.091953  142411 cri.go:89] found id: ""
	I0420 01:28:21.091986  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.091995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:21.092001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:21.092085  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:21.134134  142411 cri.go:89] found id: ""
	I0420 01:28:21.134164  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.134174  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:21.134181  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:21.134251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:21.173698  142411 cri.go:89] found id: ""
	I0420 01:28:21.173724  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.173731  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:21.173737  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:21.173801  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:21.221327  142411 cri.go:89] found id: ""
	I0420 01:28:21.221354  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.221362  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:21.221369  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:21.221428  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:21.262752  142411 cri.go:89] found id: ""
	I0420 01:28:21.262780  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.262791  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:21.262798  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:21.262851  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:21.303497  142411 cri.go:89] found id: ""
	I0420 01:28:21.303524  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.303535  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:21.303547  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:21.303563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:21.358231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:21.358265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:21.373723  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:21.373753  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:21.465016  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:21.465044  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:21.465061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:21.552087  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:21.552117  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:24.099938  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:24.116967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:24.117045  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:24.159458  142411 cri.go:89] found id: ""
	I0420 01:28:24.159491  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.159501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:24.159508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:24.159574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:24.206028  142411 cri.go:89] found id: ""
	I0420 01:28:24.206054  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.206065  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:24.206072  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:24.206137  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:24.248047  142411 cri.go:89] found id: ""
	I0420 01:28:24.248088  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.248101  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:24.248109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:24.248176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:24.287867  142411 cri.go:89] found id: ""
	I0420 01:28:24.287898  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.287909  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:24.287917  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:24.287995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:24.329399  142411 cri.go:89] found id: ""
	I0420 01:28:24.329433  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.329444  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:24.329452  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:24.329519  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:24.367846  142411 cri.go:89] found id: ""
	I0420 01:28:24.367871  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.367882  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:24.367889  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:24.367960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:24.414245  142411 cri.go:89] found id: ""
	I0420 01:28:24.414272  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.414283  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:24.414291  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:24.414354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:24.453268  142411 cri.go:89] found id: ""
	I0420 01:28:24.453302  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.453331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:24.453344  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:24.453366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:24.514501  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:24.514546  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:24.529551  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:24.529591  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:24.613734  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.613757  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:24.613775  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:24.693804  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:24.693843  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.238443  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:27.254172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:27.254235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:27.297048  142411 cri.go:89] found id: ""
	I0420 01:28:27.297101  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.297111  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:27.297119  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:27.297181  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:27.340145  142411 cri.go:89] found id: ""
	I0420 01:28:27.340171  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.340181  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:27.340189  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:27.340316  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:27.383047  142411 cri.go:89] found id: ""
	I0420 01:28:27.383077  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.383089  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:27.383096  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:27.383169  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:27.428088  142411 cri.go:89] found id: ""
	I0420 01:28:27.428122  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.428134  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:27.428142  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:27.428206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:27.468257  142411 cri.go:89] found id: ""
	I0420 01:28:27.468300  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.468310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:27.468317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:27.468389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:27.508834  142411 cri.go:89] found id: ""
	I0420 01:28:27.508873  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.508885  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:27.508892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:27.508953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:27.548853  142411 cri.go:89] found id: ""
	I0420 01:28:27.548893  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.548901  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:27.548908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:27.548956  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:27.587841  142411 cri.go:89] found id: ""
	I0420 01:28:27.587875  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.587886  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:27.587899  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:27.587917  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:27.667848  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:27.667888  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.714820  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:27.714856  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:27.766337  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:27.766381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:27.782585  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:27.782627  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:27.856172  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:30.356809  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:30.372449  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:30.372529  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:30.422164  142411 cri.go:89] found id: ""
	I0420 01:28:30.422198  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.422209  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:30.422218  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:30.422283  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:30.460367  142411 cri.go:89] found id: ""
	I0420 01:28:30.460395  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.460404  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:30.460411  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:30.460498  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:30.508423  142411 cri.go:89] found id: ""
	I0420 01:28:30.508460  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.508471  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:30.508479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:30.508546  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:30.553124  142411 cri.go:89] found id: ""
	I0420 01:28:30.553152  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.553161  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:30.553167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:30.553225  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:30.601866  142411 cri.go:89] found id: ""
	I0420 01:28:30.601908  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.601919  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:30.601939  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:30.602014  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:30.645413  142411 cri.go:89] found id: ""
	I0420 01:28:30.645446  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.645457  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:30.645467  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:30.645539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:30.690955  142411 cri.go:89] found id: ""
	I0420 01:28:30.690988  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.690997  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:30.691006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:30.691077  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:30.732146  142411 cri.go:89] found id: ""
	I0420 01:28:30.732186  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.732197  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:30.732209  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:30.732228  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:30.786890  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:30.786928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:30.802887  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:30.802920  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:30.884422  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:30.884447  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:30.884461  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:30.967504  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:30.967540  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:33.515720  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:33.531895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:33.531953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:33.574626  142411 cri.go:89] found id: ""
	I0420 01:28:33.574668  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.574682  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:33.574690  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:33.574757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:33.620527  142411 cri.go:89] found id: ""
	I0420 01:28:33.620553  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.620562  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:33.620568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:33.620630  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:33.659685  142411 cri.go:89] found id: ""
	I0420 01:28:33.659711  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.659719  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:33.659724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:33.659773  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:33.699390  142411 cri.go:89] found id: ""
	I0420 01:28:33.699414  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.699422  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:33.699427  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:33.699485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:33.743819  142411 cri.go:89] found id: ""
	I0420 01:28:33.743844  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.743852  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:33.743858  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:33.743907  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:33.788416  142411 cri.go:89] found id: ""
	I0420 01:28:33.788442  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.788450  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:33.788456  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:33.788514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:33.834105  142411 cri.go:89] found id: ""
	I0420 01:28:33.834129  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.834138  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:33.834144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:33.834206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:33.884118  142411 cri.go:89] found id: ""
	I0420 01:28:33.884152  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.884164  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:33.884176  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:33.884193  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:33.940493  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:33.940525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:33.954800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:33.954829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:34.030788  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:34.030812  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:34.030829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:34.119533  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:34.119574  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.667132  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:36.684253  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:36.684334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:36.723598  142411 cri.go:89] found id: ""
	I0420 01:28:36.723629  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.723641  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:36.723649  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:36.723718  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:36.761563  142411 cri.go:89] found id: ""
	I0420 01:28:36.761594  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.761606  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:36.761614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:36.761679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:36.803553  142411 cri.go:89] found id: ""
	I0420 01:28:36.803590  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.803603  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:36.803611  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:36.803674  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:36.840368  142411 cri.go:89] found id: ""
	I0420 01:28:36.840407  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.840421  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:36.840430  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:36.840497  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:36.879689  142411 cri.go:89] found id: ""
	I0420 01:28:36.879724  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.879735  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:36.879743  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:36.879807  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:36.920757  142411 cri.go:89] found id: ""
	I0420 01:28:36.920785  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.920796  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:36.920809  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:36.920871  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:36.957522  142411 cri.go:89] found id: ""
	I0420 01:28:36.957548  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.957556  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:36.957562  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:36.957624  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:36.997358  142411 cri.go:89] found id: ""
	I0420 01:28:36.997390  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.997400  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:36.997409  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:36.997422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:37.055063  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:37.055105  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:37.070691  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:37.070720  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:37.150114  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:37.150140  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:37.150152  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:37.228676  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:37.228711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:39.776620  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:39.792201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:39.792268  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:39.831544  142411 cri.go:89] found id: ""
	I0420 01:28:39.831568  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.831576  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:39.831588  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:39.831652  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:39.869458  142411 cri.go:89] found id: ""
	I0420 01:28:39.869488  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.869496  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:39.869503  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:39.869564  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:39.911588  142411 cri.go:89] found id: ""
	I0420 01:28:39.911615  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.911626  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:39.911633  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:39.911703  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:39.952458  142411 cri.go:89] found id: ""
	I0420 01:28:39.952489  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.952505  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:39.952513  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:39.952580  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:39.992988  142411 cri.go:89] found id: ""
	I0420 01:28:39.993016  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.993023  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:39.993029  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:39.993117  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:40.038306  142411 cri.go:89] found id: ""
	I0420 01:28:40.038348  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.038359  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:40.038367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:40.038432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:40.082185  142411 cri.go:89] found id: ""
	I0420 01:28:40.082219  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.082230  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:40.082238  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:40.082332  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:40.120346  142411 cri.go:89] found id: ""
	I0420 01:28:40.120373  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.120382  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:40.120391  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:40.120405  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:40.173735  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:40.173769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:40.191808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:40.191844  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:40.271429  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:40.271456  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:40.271473  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:40.361519  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:40.361558  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:42.938354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:42.953088  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:42.953167  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:42.992539  142411 cri.go:89] found id: ""
	I0420 01:28:42.992564  142411 logs.go:276] 0 containers: []
	W0420 01:28:42.992571  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:42.992577  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:42.992637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:43.032017  142411 cri.go:89] found id: ""
	I0420 01:28:43.032059  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.032074  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:43.032082  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:43.032142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:43.077229  142411 cri.go:89] found id: ""
	I0420 01:28:43.077258  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.077266  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:43.077272  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:43.077342  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:43.117107  142411 cri.go:89] found id: ""
	I0420 01:28:43.117128  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.117139  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:43.117145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:43.117206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:43.156262  142411 cri.go:89] found id: ""
	I0420 01:28:43.156297  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.156310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:43.156317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:43.156384  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:43.195897  142411 cri.go:89] found id: ""
	I0420 01:28:43.195927  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.195935  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:43.195942  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:43.195990  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:43.230468  142411 cri.go:89] found id: ""
	I0420 01:28:43.230498  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.230513  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:43.230522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:43.230586  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:43.271980  142411 cri.go:89] found id: ""
	I0420 01:28:43.272009  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.272023  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:43.272035  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:43.272050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:43.331606  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:43.331641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:43.348411  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:43.348437  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:43.428628  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:43.428654  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:43.428675  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:43.511471  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:43.511506  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.056166  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:46.071677  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:46.071744  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:46.110710  142411 cri.go:89] found id: ""
	I0420 01:28:46.110740  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.110753  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:46.110761  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:46.110825  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:46.170680  142411 cri.go:89] found id: ""
	I0420 01:28:46.170712  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.170724  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:46.170731  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:46.170794  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:46.216387  142411 cri.go:89] found id: ""
	I0420 01:28:46.216413  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.216421  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:46.216429  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:46.216485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:46.258641  142411 cri.go:89] found id: ""
	I0420 01:28:46.258674  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.258685  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:46.258694  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:46.258755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:46.296359  142411 cri.go:89] found id: ""
	I0420 01:28:46.296395  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.296407  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:46.296416  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:46.296480  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:46.335194  142411 cri.go:89] found id: ""
	I0420 01:28:46.335223  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.335238  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:46.335247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:46.335300  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:46.373748  142411 cri.go:89] found id: ""
	I0420 01:28:46.373777  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.373789  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:46.373796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:46.373860  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:46.416960  142411 cri.go:89] found id: ""
	I0420 01:28:46.416987  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.416995  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:46.417005  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:46.417017  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:46.497542  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:46.497582  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.548086  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:46.548136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:46.607354  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:46.607390  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:46.624379  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:46.624415  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:46.707425  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:49.208459  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:49.223081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:49.223146  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:49.258688  142411 cri.go:89] found id: ""
	I0420 01:28:49.258718  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.258728  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:49.258734  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:49.258791  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:49.296817  142411 cri.go:89] found id: ""
	I0420 01:28:49.296859  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.296870  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:49.296878  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:49.296941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:49.337821  142411 cri.go:89] found id: ""
	I0420 01:28:49.337853  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.337863  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:49.337870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:49.337940  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:49.381360  142411 cri.go:89] found id: ""
	I0420 01:28:49.381384  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.381392  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:49.381397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:49.381463  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:49.420099  142411 cri.go:89] found id: ""
	I0420 01:28:49.420143  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.420154  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:49.420162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:49.420223  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:49.459810  142411 cri.go:89] found id: ""
	I0420 01:28:49.459843  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.459850  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:49.459859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:49.459911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:49.499776  142411 cri.go:89] found id: ""
	I0420 01:28:49.499808  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.499820  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:49.499828  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:49.499894  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:49.536115  142411 cri.go:89] found id: ""
	I0420 01:28:49.536147  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.536158  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:49.536169  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:49.536190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:49.594665  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:49.594701  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:49.611896  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:49.611929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:49.689667  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:49.689685  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:49.689697  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.769061  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:49.769106  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.319299  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:52.336861  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:52.336934  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:52.380690  142411 cri.go:89] found id: ""
	I0420 01:28:52.380717  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.380725  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:52.380731  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:52.380781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:52.429798  142411 cri.go:89] found id: ""
	I0420 01:28:52.429831  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.429843  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:52.429851  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:52.429915  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:52.474087  142411 cri.go:89] found id: ""
	I0420 01:28:52.474120  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.474130  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:52.474139  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:52.474204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:52.514739  142411 cri.go:89] found id: ""
	I0420 01:28:52.514776  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.514789  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:52.514796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:52.514852  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:52.562100  142411 cri.go:89] found id: ""
	I0420 01:28:52.562195  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.562228  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:52.562236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:52.562324  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:52.623266  142411 cri.go:89] found id: ""
	I0420 01:28:52.623301  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.623313  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:52.623321  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:52.623386  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:52.667788  142411 cri.go:89] found id: ""
	I0420 01:28:52.667818  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.667828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:52.667838  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:52.667902  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:52.724607  142411 cri.go:89] found id: ""
	I0420 01:28:52.724636  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.724645  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:52.724654  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:52.724666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.774798  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:52.774836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:52.833949  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:52.833989  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:52.851757  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:52.851787  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:52.939092  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:52.939119  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:52.939136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
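The cycle above repeats every few seconds: minikube looks for a running kube-apiserver process, queries CRI-O for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. The same checks can be repeated by hand on the node; a minimal sketch, assuming SSH access via `minikube ssh` (the profile name below is a placeholder, not taken from this log):

    # open a shell on the node of the affected profile (name is hypothetical)
    minikube ssh -p <profile>
    # the process check the harness runs
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # the per-container CRI query; an empty result corresponds to the 'found id: ""' lines above
    sudo crictl ps -a --name=kube-apiserver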
	I0420 01:28:55.525807  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:55.540481  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:55.540557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:55.584415  142411 cri.go:89] found id: ""
	I0420 01:28:55.584447  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.584458  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:55.584466  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:55.584538  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:55.623920  142411 cri.go:89] found id: ""
	I0420 01:28:55.623955  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.623965  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:55.623973  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:55.624037  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:55.667768  142411 cri.go:89] found id: ""
	I0420 01:28:55.667802  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.667810  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:55.667816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:55.667889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:55.708466  142411 cri.go:89] found id: ""
	I0420 01:28:55.708502  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.708513  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:55.708520  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:55.708600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:55.748797  142411 cri.go:89] found id: ""
	I0420 01:28:55.748838  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.748849  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:55.748857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:55.748919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:55.791714  142411 cri.go:89] found id: ""
	I0420 01:28:55.791743  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.791752  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:55.791761  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:55.791832  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:55.833836  142411 cri.go:89] found id: ""
	I0420 01:28:55.833862  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.833872  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:55.833879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:55.833942  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:55.877425  142411 cri.go:89] found id: ""
	I0420 01:28:55.877462  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.877472  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:55.877484  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:55.877501  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:55.933237  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:55.933280  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:55.949507  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:55.949534  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:56.025596  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:56.025624  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:56.025641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:56.105403  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:56.105439  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:58.653368  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:58.669367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:58.669429  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:58.712457  142411 cri.go:89] found id: ""
	I0420 01:28:58.712490  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.712501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:58.712508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:58.712574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:58.750246  142411 cri.go:89] found id: ""
	I0420 01:28:58.750273  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.750281  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:58.750287  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:58.750351  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:58.793486  142411 cri.go:89] found id: ""
	I0420 01:28:58.793514  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.793522  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:58.793529  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:58.793595  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:58.839413  142411 cri.go:89] found id: ""
	I0420 01:28:58.839448  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.839461  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:58.839469  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:58.839537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:58.881385  142411 cri.go:89] found id: ""
	I0420 01:28:58.881418  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.881430  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:58.881438  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:58.881509  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:58.923900  142411 cri.go:89] found id: ""
	I0420 01:28:58.923945  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.923965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:58.923975  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:58.924038  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:58.962795  142411 cri.go:89] found id: ""
	I0420 01:28:58.962836  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.962848  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:58.962856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:58.962919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:59.006309  142411 cri.go:89] found id: ""
	I0420 01:28:59.006341  142411 logs.go:276] 0 containers: []
	W0420 01:28:59.006350  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:59.006360  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:59.006372  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:59.062778  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:59.062819  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.078600  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:59.078630  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:59.159340  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:59.159361  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:59.159376  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:59.247257  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:59.247307  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:01.792687  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:01.808507  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:01.808588  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:01.851642  142411 cri.go:89] found id: ""
	I0420 01:29:01.851680  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.851691  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:01.851699  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:01.851765  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:01.891516  142411 cri.go:89] found id: ""
	I0420 01:29:01.891549  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.891560  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:01.891568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:01.891640  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:01.934353  142411 cri.go:89] found id: ""
	I0420 01:29:01.934390  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.934402  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:01.934410  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:01.934479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:01.972552  142411 cri.go:89] found id: ""
	I0420 01:29:01.972587  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.972599  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:01.972607  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:01.972711  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:02.012316  142411 cri.go:89] found id: ""
	I0420 01:29:02.012348  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.012360  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:02.012368  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:02.012423  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:02.056951  142411 cri.go:89] found id: ""
	I0420 01:29:02.056984  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.056994  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:02.057001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:02.057164  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:02.104061  142411 cri.go:89] found id: ""
	I0420 01:29:02.104091  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.104102  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:02.104110  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:02.104163  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:02.144085  142411 cri.go:89] found id: ""
	I0420 01:29:02.144114  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.144125  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:02.144137  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:02.144160  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:02.216560  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:02.216585  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:02.216598  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:02.307178  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:02.307222  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:02.349769  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:02.349798  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:02.401141  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:02.401176  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:04.917513  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:04.934187  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:04.934266  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:04.970258  142411 cri.go:89] found id: ""
	I0420 01:29:04.970289  142411 logs.go:276] 0 containers: []
	W0420 01:29:04.970298  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:04.970304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:04.970359  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:05.012853  142411 cri.go:89] found id: ""
	I0420 01:29:05.012883  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.012893  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:05.012899  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:05.012960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:05.054793  142411 cri.go:89] found id: ""
	I0420 01:29:05.054822  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.054833  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:05.054842  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:05.054910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:05.094637  142411 cri.go:89] found id: ""
	I0420 01:29:05.094674  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.094684  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:05.094701  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:05.094770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:05.134874  142411 cri.go:89] found id: ""
	I0420 01:29:05.134903  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.134912  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:05.134918  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:05.134973  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:05.175637  142411 cri.go:89] found id: ""
	I0420 01:29:05.175668  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.175679  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:05.175687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:05.175752  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:05.217809  142411 cri.go:89] found id: ""
	I0420 01:29:05.217847  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.217860  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:05.217867  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:05.217933  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:05.266884  142411 cri.go:89] found id: ""
	I0420 01:29:05.266917  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.266930  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:05.266941  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:05.266958  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:05.323765  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:05.323818  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:05.338524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:05.338553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:05.419860  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:05.419889  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:05.419906  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:05.506268  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:05.506311  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.055690  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:08.072692  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:08.072758  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:08.116247  142411 cri.go:89] found id: ""
	I0420 01:29:08.116287  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.116296  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:08.116304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:08.116369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:08.163152  142411 cri.go:89] found id: ""
	I0420 01:29:08.163177  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.163185  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:08.163190  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:08.163246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:08.207330  142411 cri.go:89] found id: ""
	I0420 01:29:08.207357  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.207365  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:08.207371  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:08.207422  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:08.249833  142411 cri.go:89] found id: ""
	I0420 01:29:08.249864  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.249873  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:08.249879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:08.249941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:08.290834  142411 cri.go:89] found id: ""
	I0420 01:29:08.290867  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.290876  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:08.290883  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:08.290957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:08.333767  142411 cri.go:89] found id: ""
	I0420 01:29:08.333799  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.333809  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:08.333816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:08.333888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:08.381431  142411 cri.go:89] found id: ""
	I0420 01:29:08.381459  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.381468  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:08.381474  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:08.381532  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:08.423702  142411 cri.go:89] found id: ""
	I0420 01:29:08.423727  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.423739  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:08.423751  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:08.423767  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.468422  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:08.468460  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:08.524091  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:08.524125  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:08.540294  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:08.540323  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:08.622439  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:08.622472  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:08.622488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.208472  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:11.225412  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:11.225479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:11.273723  142411 cri.go:89] found id: ""
	I0420 01:29:11.273755  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.273767  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:11.273775  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:11.273840  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:11.316083  142411 cri.go:89] found id: ""
	I0420 01:29:11.316118  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.316130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:11.316137  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:11.316203  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:11.355632  142411 cri.go:89] found id: ""
	I0420 01:29:11.355659  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.355668  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:11.355674  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:11.355734  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:11.397277  142411 cri.go:89] found id: ""
	I0420 01:29:11.397305  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.397327  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:11.397335  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:11.397399  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:11.439333  142411 cri.go:89] found id: ""
	I0420 01:29:11.439357  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.439366  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:11.439372  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:11.439433  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:11.477044  142411 cri.go:89] found id: ""
	I0420 01:29:11.477072  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.477079  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:11.477086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:11.477142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:11.516150  142411 cri.go:89] found id: ""
	I0420 01:29:11.516184  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.516196  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:11.516204  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:11.516274  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:11.557272  142411 cri.go:89] found id: ""
	I0420 01:29:11.557303  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.557331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:11.557344  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:11.557366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.652272  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:11.652319  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:11.700469  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:11.700504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:11.756674  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:11.756711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:11.772377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:11.772407  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:11.851387  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
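The recurring "connection to the server localhost:8443 was refused" error follows directly from the empty container listings: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at the apiserver on localhost:8443, but with no kube-apiserver container running nothing is listening on that port, so every `kubectl describe nodes` attempt fails the same way. A minimal sketch of confirming this on the node (generic checks, not commands issued by the test harness):

    # nothing should be listening on the apiserver port while this error persists
    sudo ss -tlnp | grep 8443
    # a direct probe fails in the same way kubectl does
    curl -sk https://localhost:8443/healthz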
	I0420 01:29:14.352257  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:14.367635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:14.367714  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:14.408757  142411 cri.go:89] found id: ""
	I0420 01:29:14.408779  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.408788  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:14.408794  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:14.408843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:14.455123  142411 cri.go:89] found id: ""
	I0420 01:29:14.455150  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.455159  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:14.455165  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:14.455239  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:14.499546  142411 cri.go:89] found id: ""
	I0420 01:29:14.499573  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.499581  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:14.499587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:14.499635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:14.541811  142411 cri.go:89] found id: ""
	I0420 01:29:14.541841  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.541851  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:14.541859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:14.541923  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:14.586965  142411 cri.go:89] found id: ""
	I0420 01:29:14.586990  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.587001  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:14.587008  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:14.587071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:14.625251  142411 cri.go:89] found id: ""
	I0420 01:29:14.625279  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.625288  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:14.625294  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:14.625377  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:14.665038  142411 cri.go:89] found id: ""
	I0420 01:29:14.665067  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.665079  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:14.665086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:14.665157  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:14.706931  142411 cri.go:89] found id: ""
	I0420 01:29:14.706964  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.706978  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:14.706992  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:14.707044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:14.761681  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:14.761717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:14.776324  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:14.776350  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:14.856707  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:14.856727  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:14.856738  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:14.944019  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:14.944064  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:17.489112  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:17.507594  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:17.507660  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:17.556091  142411 cri.go:89] found id: ""
	I0420 01:29:17.556122  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.556132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:17.556140  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:17.556205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:17.600016  142411 cri.go:89] found id: ""
	I0420 01:29:17.600072  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.600086  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:17.600107  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:17.600171  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:17.643074  142411 cri.go:89] found id: ""
	I0420 01:29:17.643106  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.643118  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:17.643125  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:17.643190  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:17.684798  142411 cri.go:89] found id: ""
	I0420 01:29:17.684827  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.684838  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:17.684845  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:17.684910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:17.725451  142411 cri.go:89] found id: ""
	I0420 01:29:17.725481  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.725494  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:17.725503  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:17.725575  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:17.765918  142411 cri.go:89] found id: ""
	I0420 01:29:17.765944  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.765952  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:17.765959  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:17.766023  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:17.806011  142411 cri.go:89] found id: ""
	I0420 01:29:17.806038  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.806049  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:17.806056  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:17.806122  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:17.848409  142411 cri.go:89] found id: ""
	I0420 01:29:17.848441  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.848453  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:17.848465  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:17.848488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:17.903854  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:17.903900  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:17.919156  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:17.919191  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:18.008073  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:18.008115  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:18.008133  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:18.095887  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:18.095929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:20.646919  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:20.664559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:20.664635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:20.714440  142411 cri.go:89] found id: ""
	I0420 01:29:20.714472  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.714481  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:20.714487  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:20.714543  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:20.755249  142411 cri.go:89] found id: ""
	I0420 01:29:20.755276  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.755287  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:20.755294  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:20.755355  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:20.795744  142411 cri.go:89] found id: ""
	I0420 01:29:20.795777  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.795786  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:20.795797  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:20.795864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:20.838083  142411 cri.go:89] found id: ""
	I0420 01:29:20.838111  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.838120  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:20.838128  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:20.838193  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:20.880198  142411 cri.go:89] found id: ""
	I0420 01:29:20.880227  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.880238  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:20.880245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:20.880312  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:20.920496  142411 cri.go:89] found id: ""
	I0420 01:29:20.920522  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.920530  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:20.920536  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:20.920618  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:20.960137  142411 cri.go:89] found id: ""
	I0420 01:29:20.960170  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.960180  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:20.960186  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:20.960251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:20.999583  142411 cri.go:89] found id: ""
	I0420 01:29:20.999624  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.999637  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:20.999649  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:20.999665  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:21.077439  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:21.077476  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:21.121104  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:21.121148  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:21.173871  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:21.173909  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:21.189767  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:21.189795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:21.264715  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:23.765605  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:23.782250  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:23.782334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:23.827248  142411 cri.go:89] found id: ""
	I0420 01:29:23.827277  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.827285  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:23.827291  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:23.827349  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:23.867610  142411 cri.go:89] found id: ""
	I0420 01:29:23.867636  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.867645  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:23.867651  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:23.867712  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:23.906244  142411 cri.go:89] found id: ""
	I0420 01:29:23.906271  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.906278  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:23.906283  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:23.906343  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:23.952256  142411 cri.go:89] found id: ""
	I0420 01:29:23.952288  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.952306  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:23.952314  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:23.952378  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:23.992843  142411 cri.go:89] found id: ""
	I0420 01:29:23.992879  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.992888  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:23.992896  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:23.992959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:24.036460  142411 cri.go:89] found id: ""
	I0420 01:29:24.036493  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.036504  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:24.036512  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:24.036582  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:24.075910  142411 cri.go:89] found id: ""
	I0420 01:29:24.075944  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.075955  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:24.075962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:24.076033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:24.122638  142411 cri.go:89] found id: ""
	I0420 01:29:24.122676  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.122688  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:24.122698  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:24.122717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:24.138022  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:24.138061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:24.220977  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:24.220998  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:24.221012  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.302928  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:24.302972  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:24.351237  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:24.351277  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
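Because every CRI query comes back empty, the most useful remaining signal is the kubelet journal that the harness collects with `journalctl -u kubelet -n 400`. The same logs can be read directly on the node to see whether the kubelet is attempting to start the static control-plane pods at all; a brief sketch, again assuming shell access to the node:

    # service state and the most recent kubelet log lines
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager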
	I0420 01:29:26.910354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:26.926815  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:26.926900  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:26.966123  142411 cri.go:89] found id: ""
	I0420 01:29:26.966155  142411 logs.go:276] 0 containers: []
	W0420 01:29:26.966165  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:26.966172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:26.966246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:27.011679  142411 cri.go:89] found id: ""
	I0420 01:29:27.011714  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.011727  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:27.011735  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:27.011806  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:27.052116  142411 cri.go:89] found id: ""
	I0420 01:29:27.052141  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.052148  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:27.052155  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:27.052202  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:27.090375  142411 cri.go:89] found id: ""
	I0420 01:29:27.090404  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.090413  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:27.090419  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:27.090476  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:27.131911  142411 cri.go:89] found id: ""
	I0420 01:29:27.131946  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.131957  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:27.131965  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:27.132033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:27.176663  142411 cri.go:89] found id: ""
	I0420 01:29:27.176696  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.176714  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:27.176723  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:27.176788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:27.217806  142411 cri.go:89] found id: ""
	I0420 01:29:27.217836  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.217846  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:27.217853  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:27.217917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:27.253956  142411 cri.go:89] found id: ""
	I0420 01:29:27.253981  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.253989  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:27.253998  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:27.254014  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:27.298225  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:27.298264  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:27.351213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:27.351259  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:27.366352  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:27.366388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:27.466716  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:27.466742  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:27.466770  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:30.050528  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:30.065697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:30.065769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:30.104643  142411 cri.go:89] found id: ""
	I0420 01:29:30.104675  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.104686  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:30.104694  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:30.104753  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:30.143864  142411 cri.go:89] found id: ""
	I0420 01:29:30.143892  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.143903  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:30.143910  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:30.143976  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:30.187925  142411 cri.go:89] found id: ""
	I0420 01:29:30.187954  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.187964  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:30.187972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:30.188035  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:30.227968  142411 cri.go:89] found id: ""
	I0420 01:29:30.227995  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.228003  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:30.228009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:30.228059  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.269550  142411 cri.go:89] found id: ""
	I0420 01:29:30.269584  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.269596  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:30.269604  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:30.269672  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:30.311777  142411 cri.go:89] found id: ""
	I0420 01:29:30.311810  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.311819  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:30.311827  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:30.311878  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:30.353569  142411 cri.go:89] found id: ""
	I0420 01:29:30.353601  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.353610  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:30.353617  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:30.353683  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:30.395003  142411 cri.go:89] found id: ""
	I0420 01:29:30.395032  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.395043  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:30.395054  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:30.395066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:30.455495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:30.455536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:30.473749  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:30.473778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:30.555370  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:30.555397  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:30.555417  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:30.637079  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:30.637124  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:33.188917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:33.203689  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:33.203757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:33.246796  142411 cri.go:89] found id: ""
	I0420 01:29:33.246828  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.246840  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:33.246848  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:33.246911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:33.284667  142411 cri.go:89] found id: ""
	I0420 01:29:33.284700  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.284712  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:33.284720  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:33.284782  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:33.328653  142411 cri.go:89] found id: ""
	I0420 01:29:33.328688  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.328701  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:33.328709  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:33.328777  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:33.369081  142411 cri.go:89] found id: ""
	I0420 01:29:33.369107  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.369121  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:33.369130  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:33.369180  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:33.414282  142411 cri.go:89] found id: ""
	I0420 01:29:33.414313  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.414322  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:33.414327  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:33.414411  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:33.457086  142411 cri.go:89] found id: ""
	I0420 01:29:33.457112  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.457119  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:33.457126  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:33.457176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:33.498686  142411 cri.go:89] found id: ""
	I0420 01:29:33.498716  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.498729  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:33.498738  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:33.498808  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:33.538872  142411 cri.go:89] found id: ""
	I0420 01:29:33.538907  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.538920  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:33.538932  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:33.538959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:33.592586  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:33.592631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:33.609200  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:33.609226  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:33.690795  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:33.690820  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:33.690836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:33.776092  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:33.776131  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:36.331256  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:36.348813  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:36.348892  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:36.397503  142411 cri.go:89] found id: ""
	I0420 01:29:36.397527  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.397534  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:36.397540  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:36.397603  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:36.439638  142411 cri.go:89] found id: ""
	I0420 01:29:36.439667  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.439675  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:36.439685  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:36.439761  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:36.477155  142411 cri.go:89] found id: ""
	I0420 01:29:36.477182  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.477194  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:36.477201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:36.477259  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:36.533326  142411 cri.go:89] found id: ""
	I0420 01:29:36.533360  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.533373  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:36.533381  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:36.533446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:36.573056  142411 cri.go:89] found id: ""
	I0420 01:29:36.573093  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.573107  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:36.573114  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:36.573177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:36.611901  142411 cri.go:89] found id: ""
	I0420 01:29:36.611937  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.611949  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:36.611957  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:36.612017  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:36.656780  142411 cri.go:89] found id: ""
	I0420 01:29:36.656810  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.656823  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:36.656830  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:36.656899  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:36.699872  142411 cri.go:89] found id: ""
	I0420 01:29:36.699906  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.699916  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:36.699928  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:36.699943  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:36.758859  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:36.758895  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:36.775108  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:36.775145  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:36.858001  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:36.858027  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:36.858044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:36.936114  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:36.936154  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:39.487167  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:39.502929  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:39.502995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:39.547338  142411 cri.go:89] found id: ""
	I0420 01:29:39.547363  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.547371  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:39.547377  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:39.547430  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:39.608684  142411 cri.go:89] found id: ""
	I0420 01:29:39.608714  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.608722  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:39.608728  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:39.608793  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:39.679248  142411 cri.go:89] found id: ""
	I0420 01:29:39.679281  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.679292  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:39.679300  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:39.679361  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:39.725226  142411 cri.go:89] found id: ""
	I0420 01:29:39.725257  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.725270  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:39.725278  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:39.725363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:39.767653  142411 cri.go:89] found id: ""
	I0420 01:29:39.767681  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.767690  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:39.767697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:39.767760  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:39.807848  142411 cri.go:89] found id: ""
	I0420 01:29:39.807885  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.807893  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:39.807900  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:39.807968  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:39.847171  142411 cri.go:89] found id: ""
	I0420 01:29:39.847201  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.847212  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:39.847219  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:39.847284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:39.884959  142411 cri.go:89] found id: ""
	I0420 01:29:39.884996  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.885007  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:39.885034  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:39.885050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:39.959245  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:39.959269  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:39.959286  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:40.041394  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:40.041436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:40.083125  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:40.083171  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:40.139902  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:40.139957  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:42.657038  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:42.673303  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:42.673407  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:42.717081  142411 cri.go:89] found id: ""
	I0420 01:29:42.717106  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.717114  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:42.717120  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:42.717170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:42.762322  142411 cri.go:89] found id: ""
	I0420 01:29:42.762357  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.762367  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:42.762375  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:42.762442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:42.805059  142411 cri.go:89] found id: ""
	I0420 01:29:42.805112  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.805122  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:42.805131  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:42.805201  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:42.848539  142411 cri.go:89] found id: ""
	I0420 01:29:42.848568  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.848580  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:42.848587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:42.848679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:42.887915  142411 cri.go:89] found id: ""
	I0420 01:29:42.887949  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.887960  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:42.887967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:42.888032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:42.938832  142411 cri.go:89] found id: ""
	I0420 01:29:42.938867  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.938878  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:42.938888  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:42.938957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:42.982376  142411 cri.go:89] found id: ""
	I0420 01:29:42.982402  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.982409  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:42.982415  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:42.982477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:43.023264  142411 cri.go:89] found id: ""
	I0420 01:29:43.023293  142411 logs.go:276] 0 containers: []
	W0420 01:29:43.023301  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:43.023313  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:43.023326  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:43.079673  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:43.079714  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:43.094753  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:43.094786  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:43.180113  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:43.180149  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:43.180177  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:43.259830  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:43.259872  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:45.802515  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:45.816908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:45.816965  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:45.861091  142411 cri.go:89] found id: ""
	I0420 01:29:45.861123  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.861132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:45.861138  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:45.861224  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:45.901677  142411 cri.go:89] found id: ""
	I0420 01:29:45.901702  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.901710  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:45.901716  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:45.901767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:45.938301  142411 cri.go:89] found id: ""
	I0420 01:29:45.938325  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.938334  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:45.938339  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:45.938393  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:45.978432  142411 cri.go:89] found id: ""
	I0420 01:29:45.978460  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.978473  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:45.978479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:45.978537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:46.019410  142411 cri.go:89] found id: ""
	I0420 01:29:46.019446  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.019455  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:46.019461  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:46.019524  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:46.071002  142411 cri.go:89] found id: ""
	I0420 01:29:46.071032  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.071041  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:46.071052  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:46.071124  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:46.110362  142411 cri.go:89] found id: ""
	I0420 01:29:46.110391  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.110402  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:46.110409  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:46.110477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:46.152276  142411 cri.go:89] found id: ""
	I0420 01:29:46.152311  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.152322  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:46.152334  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:46.152351  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:46.205121  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:46.205159  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:46.221808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:46.221842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:46.300394  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:46.300418  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:46.300434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:46.391961  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:46.392002  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:48.945086  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:48.961414  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:48.961491  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:49.010230  142411 cri.go:89] found id: ""
	I0420 01:29:49.010285  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.010299  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:49.010309  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:49.010385  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:49.054455  142411 cri.go:89] found id: ""
	I0420 01:29:49.054481  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.054491  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:49.054499  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:49.054566  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:49.094536  142411 cri.go:89] found id: ""
	I0420 01:29:49.094562  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.094572  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:49.094580  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:49.094740  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:49.134004  142411 cri.go:89] found id: ""
	I0420 01:29:49.134035  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.134046  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:49.134054  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:49.134118  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:49.173697  142411 cri.go:89] found id: ""
	I0420 01:29:49.173728  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.173741  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:49.173750  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:49.173817  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:49.215655  142411 cri.go:89] found id: ""
	I0420 01:29:49.215681  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.215689  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:49.215695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:49.215745  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:49.258282  142411 cri.go:89] found id: ""
	I0420 01:29:49.258312  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.258324  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:49.258332  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:49.258394  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:49.298565  142411 cri.go:89] found id: ""
	I0420 01:29:49.298597  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.298608  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:49.298620  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:49.298638  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:49.378833  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:49.378862  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:49.378880  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:49.467477  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:49.467517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:49.521747  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:49.521788  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:49.583386  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:49.583436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.102969  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:52.122971  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:52.123053  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:52.166166  142411 cri.go:89] found id: ""
	I0420 01:29:52.166199  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.166210  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:52.166219  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:52.166287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:52.206790  142411 cri.go:89] found id: ""
	I0420 01:29:52.206817  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.206824  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:52.206830  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:52.206889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:52.249879  142411 cri.go:89] found id: ""
	I0420 01:29:52.249911  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.249921  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:52.249931  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:52.249997  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:52.293953  142411 cri.go:89] found id: ""
	I0420 01:29:52.293997  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.294009  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:52.294018  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:52.294095  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:52.339447  142411 cri.go:89] found id: ""
	I0420 01:29:52.339478  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.339490  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:52.339497  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:52.339558  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:52.378383  142411 cri.go:89] found id: ""
	I0420 01:29:52.378416  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.378428  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:52.378435  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:52.378488  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:52.423079  142411 cri.go:89] found id: ""
	I0420 01:29:52.423121  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.423130  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:52.423137  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:52.423205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:52.459525  142411 cri.go:89] found id: ""
	I0420 01:29:52.459559  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.459572  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:52.459594  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:52.459610  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:52.567141  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:52.567186  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:52.618194  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:52.618235  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:52.681921  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:52.681959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.699065  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:52.699108  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:52.776829  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:55.277933  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:55.293380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:55.293455  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:55.337443  142411 cri.go:89] found id: ""
	I0420 01:29:55.337475  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.337483  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:55.337491  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:55.337557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:55.375911  142411 cri.go:89] found id: ""
	I0420 01:29:55.375942  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.375951  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:55.375957  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:55.376022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:55.418545  142411 cri.go:89] found id: ""
	I0420 01:29:55.418569  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.418577  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:55.418583  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:55.418635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:55.459343  142411 cri.go:89] found id: ""
	I0420 01:29:55.459378  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.459390  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:55.459397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:55.459452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:55.503851  142411 cri.go:89] found id: ""
	I0420 01:29:55.503878  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.503887  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:55.503895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:55.503959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:55.542533  142411 cri.go:89] found id: ""
	I0420 01:29:55.542556  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.542562  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:55.542568  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:55.542623  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:55.582205  142411 cri.go:89] found id: ""
	I0420 01:29:55.582236  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.582246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:55.582252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:55.582314  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:55.624727  142411 cri.go:89] found id: ""
	I0420 01:29:55.624757  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.624769  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:55.624781  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:55.624803  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:55.675403  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:55.675438  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:55.691492  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:55.691516  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:55.772283  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:55.772313  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:55.772330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:55.859440  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:55.859477  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:58.406009  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:58.422305  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:58.422382  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:58.468206  142411 cri.go:89] found id: ""
	I0420 01:29:58.468303  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.468321  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:58.468329  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:58.468402  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:58.513981  142411 cri.go:89] found id: ""
	I0420 01:29:58.514018  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.514027  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:58.514041  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:58.514105  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:58.559967  142411 cri.go:89] found id: ""
	I0420 01:29:58.560000  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.560011  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:58.560019  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:58.560084  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:58.600710  142411 cri.go:89] found id: ""
	I0420 01:29:58.600744  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.600763  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:58.600771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:58.600834  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:58.645995  142411 cri.go:89] found id: ""
	I0420 01:29:58.646022  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.646030  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:58.646036  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:58.646097  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:58.684930  142411 cri.go:89] found id: ""
	I0420 01:29:58.684957  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.684965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:58.684972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:58.685022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:58.727225  142411 cri.go:89] found id: ""
	I0420 01:29:58.727251  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.727259  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:58.727265  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:58.727319  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:58.765244  142411 cri.go:89] found id: ""
	I0420 01:29:58.765282  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.765293  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:58.765303  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:58.765330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:58.817791  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:58.817822  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:58.832882  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:58.832926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:58.919297  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:58.919325  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:58.919342  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:59.002590  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:59.002637  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.551854  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:01.568974  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:01.569054  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:01.609165  142411 cri.go:89] found id: ""
	I0420 01:30:01.609191  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.609200  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:01.609206  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:01.609272  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:01.653349  142411 cri.go:89] found id: ""
	I0420 01:30:01.653383  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.653396  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:01.653405  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:01.653482  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:01.698961  142411 cri.go:89] found id: ""
	I0420 01:30:01.698991  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.699002  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:01.699009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:01.699063  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:01.739230  142411 cri.go:89] found id: ""
	I0420 01:30:01.739271  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.739283  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:01.739292  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:01.739376  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:01.781839  142411 cri.go:89] found id: ""
	I0420 01:30:01.781873  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.781885  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:01.781893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:01.781960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:01.821212  142411 cri.go:89] found id: ""
	I0420 01:30:01.821241  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.821252  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:01.821259  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:01.821339  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:01.859959  142411 cri.go:89] found id: ""
	I0420 01:30:01.859984  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.859993  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:01.859999  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:01.860060  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:01.898832  142411 cri.go:89] found id: ""
	I0420 01:30:01.898858  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.898865  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:01.898875  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:01.898886  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.943065  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:01.943156  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:01.995618  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:01.995654  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:02.010489  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:02.010517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:02.090181  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:02.090222  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:02.090238  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
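Each of the repeated blocks above and below is the same readiness probe: minikube looks for a running kube-apiserver process and, when none is found, lists containers for each control-plane component by name before gathering kubelet, dmesg, describe-nodes, and CRI-O logs. A minimal shell equivalent of that probe, assuming the same component names and a CRI-O runtime (a sketch, not minikube's actual implementation):

	if ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    # list all containers (any state) for this component; empty output means none exist yet
	    sudo crictl ps -a --quiet --name="$name"
	  done
	fi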
	I0420 01:30:04.671376  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:04.687535  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:04.687629  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:04.728732  142411 cri.go:89] found id: ""
	I0420 01:30:04.728765  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.728778  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:04.728786  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:04.728854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:04.768537  142411 cri.go:89] found id: ""
	I0420 01:30:04.768583  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.768602  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:04.768610  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:04.768676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:04.811714  142411 cri.go:89] found id: ""
	I0420 01:30:04.811741  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.811750  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:04.811756  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:04.811816  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:04.852324  142411 cri.go:89] found id: ""
	I0420 01:30:04.852360  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.852371  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:04.852379  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:04.852452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:04.891657  142411 cri.go:89] found id: ""
	I0420 01:30:04.891688  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.891700  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:04.891708  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:04.891774  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:04.933192  142411 cri.go:89] found id: ""
	I0420 01:30:04.933222  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.933230  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:04.933236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:04.933291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:04.972796  142411 cri.go:89] found id: ""
	I0420 01:30:04.972819  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.972828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:04.972834  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:04.972888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:05.014782  142411 cri.go:89] found id: ""
	I0420 01:30:05.014821  142411 logs.go:276] 0 containers: []
	W0420 01:30:05.014833  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:05.014846  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:05.014862  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:05.067438  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:05.067470  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:05.121336  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:05.121371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:05.137495  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:05.137529  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:05.214132  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:05.214153  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:05.214170  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:07.796964  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:07.810856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:07.810917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:07.846993  142411 cri.go:89] found id: ""
	I0420 01:30:07.847024  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.847033  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:07.847040  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:07.847089  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:07.886422  142411 cri.go:89] found id: ""
	I0420 01:30:07.886452  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.886464  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:07.886474  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:07.886567  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:07.942200  142411 cri.go:89] found id: ""
	I0420 01:30:07.942230  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.942238  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:07.942245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:07.942296  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:07.980179  142411 cri.go:89] found id: ""
	I0420 01:30:07.980215  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.980226  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:07.980235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:07.980299  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:08.020097  142411 cri.go:89] found id: ""
	I0420 01:30:08.020130  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.020140  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:08.020145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:08.020215  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:08.063793  142411 cri.go:89] found id: ""
	I0420 01:30:08.063837  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.063848  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:08.063857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:08.063930  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:08.108674  142411 cri.go:89] found id: ""
	I0420 01:30:08.108705  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.108716  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:08.108724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:08.108798  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:08.147467  142411 cri.go:89] found id: ""
	I0420 01:30:08.147495  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.147503  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:08.147512  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:08.147525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:08.239416  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:08.239466  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:08.294639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:08.294669  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:08.349753  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:08.349795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:08.368971  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:08.369003  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:08.449996  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:10.950318  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:10.964969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:10.965032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:11.006321  142411 cri.go:89] found id: ""
	I0420 01:30:11.006354  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.006365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:11.006375  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:11.006437  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:11.047982  142411 cri.go:89] found id: ""
	I0420 01:30:11.048010  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.048019  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:11.048025  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:11.048073  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:11.089185  142411 cri.go:89] found id: ""
	I0420 01:30:11.089217  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.089226  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:11.089232  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:11.089287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:11.131293  142411 cri.go:89] found id: ""
	I0420 01:30:11.131322  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.131335  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:11.131344  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:11.131398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:11.170394  142411 cri.go:89] found id: ""
	I0420 01:30:11.170419  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.170427  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:11.170432  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:11.170485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:11.210580  142411 cri.go:89] found id: ""
	I0420 01:30:11.210619  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.210631  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:11.210640  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:11.210706  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:11.251938  142411 cri.go:89] found id: ""
	I0420 01:30:11.251977  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.251990  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:11.251998  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:11.252064  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:11.295999  142411 cri.go:89] found id: ""
	I0420 01:30:11.296033  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.296045  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:11.296057  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:11.296072  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:11.378564  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:11.378632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:11.422836  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:11.422868  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:11.475893  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:11.475928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:11.491524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:11.491555  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:11.569066  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:14.070158  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:14.086000  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:14.086067  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:14.128864  142411 cri.go:89] found id: ""
	I0420 01:30:14.128894  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.128906  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:14.128914  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:14.128986  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:14.169447  142411 cri.go:89] found id: ""
	I0420 01:30:14.169482  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.169497  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:14.169506  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:14.169583  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:14.210007  142411 cri.go:89] found id: ""
	I0420 01:30:14.210043  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.210054  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:14.210062  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:14.210119  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:14.247652  142411 cri.go:89] found id: ""
	I0420 01:30:14.247685  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.247695  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:14.247703  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:14.247764  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:14.290788  142411 cri.go:89] found id: ""
	I0420 01:30:14.290820  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.290830  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:14.290847  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:14.290908  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:14.351514  142411 cri.go:89] found id: ""
	I0420 01:30:14.351548  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.351570  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:14.351581  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:14.351637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:14.423481  142411 cri.go:89] found id: ""
	I0420 01:30:14.423520  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.423534  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:14.423543  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:14.423615  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:14.465597  142411 cri.go:89] found id: ""
	I0420 01:30:14.465622  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.465630  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:14.465639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:14.465655  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:14.522669  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:14.522705  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:14.541258  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:14.541293  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:14.618657  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:14.618678  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:14.618691  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:14.702616  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:14.702658  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:17.256212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:17.277171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:17.277250  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:17.321548  142411 cri.go:89] found id: ""
	I0420 01:30:17.321582  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.321600  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:17.321607  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:17.321676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:17.362856  142411 cri.go:89] found id: ""
	I0420 01:30:17.362883  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.362890  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:17.362896  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:17.362966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:17.409494  142411 cri.go:89] found id: ""
	I0420 01:30:17.409525  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.409539  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:17.409548  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:17.409631  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:17.447759  142411 cri.go:89] found id: ""
	I0420 01:30:17.447801  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.447812  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:17.447819  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:17.447885  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:17.498416  142411 cri.go:89] found id: ""
	I0420 01:30:17.498444  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.498454  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:17.498460  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:17.498528  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:17.546025  142411 cri.go:89] found id: ""
	I0420 01:30:17.546055  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.546064  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:17.546072  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:17.546138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:17.585797  142411 cri.go:89] found id: ""
	I0420 01:30:17.585829  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.585840  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:17.585848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:17.585919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:17.630850  142411 cri.go:89] found id: ""
	I0420 01:30:17.630886  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.630899  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:17.630911  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:17.630926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:17.689472  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:17.689510  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:17.705603  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:17.705642  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:17.794094  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:17.794137  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:17.794155  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:17.879397  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:17.879435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:20.428142  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:20.444936  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:20.445018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:20.487317  142411 cri.go:89] found id: ""
	I0420 01:30:20.487354  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.487365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:20.487373  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:20.487443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:20.537209  142411 cri.go:89] found id: ""
	I0420 01:30:20.537241  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.537254  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:20.537262  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:20.537348  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:20.584311  142411 cri.go:89] found id: ""
	I0420 01:30:20.584343  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.584352  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:20.584357  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:20.584413  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:20.631915  142411 cri.go:89] found id: ""
	I0420 01:30:20.631948  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.631959  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:20.631969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:20.632040  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:20.679680  142411 cri.go:89] found id: ""
	I0420 01:30:20.679707  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.679716  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:20.679721  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:20.679770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:20.724967  142411 cri.go:89] found id: ""
	I0420 01:30:20.725002  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.725013  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:20.725027  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:20.725091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:20.772717  142411 cri.go:89] found id: ""
	I0420 01:30:20.772751  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.772762  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:20.772771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:20.772837  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:20.812421  142411 cri.go:89] found id: ""
	I0420 01:30:20.812449  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.812460  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:20.812471  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:20.812485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:20.870522  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:20.870554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:20.886764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:20.886793  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:20.963941  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:20.963964  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:20.963979  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:21.045738  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:21.045778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:23.600037  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:23.616539  142411 kubeadm.go:591] duration metric: took 4m4.142686832s to restartPrimaryControlPlane
	W0420 01:30:23.616641  142411 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:23.616676  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:25.481285  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.864573977s)
	I0420 01:30:25.481385  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:25.500950  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:25.518624  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:25.532506  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:25.532531  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:25.532584  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:25.546634  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:25.546708  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:25.561379  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:25.575506  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:25.575627  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:25.590615  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.604855  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:25.604923  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.619717  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:25.634525  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:25.634607  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
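The cleanup sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (or does not exist), so the subsequent kubeadm init can regenerate them. A minimal sketch of that check, assuming the same endpoint and file set:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep exits non-zero when the pattern is absent or the file is missing
	  if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done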
	I0420 01:30:25.649408  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:25.735636  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:30:25.735697  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:25.913199  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:25.913347  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:25.913483  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:26.120240  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:26.122066  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:26.122169  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:26.122279  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:26.122395  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:26.122499  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:26.122623  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:26.122715  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:26.122806  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:26.122898  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:26.122999  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:26.123113  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:26.123173  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:26.123244  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:26.243908  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:26.354349  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:26.605778  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:26.833914  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:26.855348  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:26.857029  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:26.857250  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:27.010707  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:27.012314  142411 out.go:204]   - Booting up control plane ...
	I0420 01:30:27.012456  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:27.036284  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:27.049123  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:27.050561  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:27.053222  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:31:07.054009  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:31:07.054375  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:07.054708  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:12.055506  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:12.055793  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:22.056094  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:22.056315  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:42.057024  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:42.057278  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:32:22.058965  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:32:22.059213  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:32:22.059231  142411 kubeadm.go:309] 
	I0420 01:32:22.059284  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:32:22.059341  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:32:22.059351  142411 kubeadm.go:309] 
	I0420 01:32:22.059398  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:32:22.059449  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:32:22.059581  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:32:22.059606  142411 kubeadm.go:309] 
	I0420 01:32:22.059693  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:32:22.059725  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:32:22.059796  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:32:22.059821  142411 kubeadm.go:309] 
	I0420 01:32:22.059916  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:32:22.060046  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:32:22.060068  142411 kubeadm.go:309] 
	I0420 01:32:22.060225  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:32:22.060371  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:32:22.060498  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:32:22.060624  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:32:22.060643  142411 kubeadm.go:309] 
	I0420 01:32:22.061155  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:22.061294  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:32:22.061403  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0420 01:32:22.061569  142411 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
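The failure mode here is the kubelet never answering its health check, so kubeadm times out waiting for the static control-plane pods. For convenience, the checks kubeadm's own message suggests, collected in one place for this CRI-O node (all taken from the output above):

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	curl -sSL http://localhost:10248/healthz   # the endpoint the kubelet-check above probes
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause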
	
	I0420 01:32:22.061628  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:23.211059  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.149398853s)
	I0420 01:32:23.211147  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.228140  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.240832  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.240868  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.240912  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.252674  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.252735  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.264128  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.274998  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.275059  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.286449  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.297377  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.297452  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.308971  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.320775  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.320842  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.333601  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.603058  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:34:20.028550  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:34:20.028769  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0420 01:34:20.030749  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:34:20.030826  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:34:20.030947  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:34:20.031078  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:34:20.031217  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:34:20.031319  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:34:20.032927  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:34:20.033024  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:34:20.033110  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:34:20.033211  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:34:20.033286  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:34:20.033410  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:34:20.033496  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:34:20.033597  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:34:20.033695  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:34:20.033805  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:34:20.033921  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:34:20.033972  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:34:20.034042  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:34:20.034125  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:34:20.034200  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:34:20.034287  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:34:20.034355  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:34:20.034510  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:34:20.034614  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:34:20.034680  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:34:20.034760  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:34:20.036300  142411 out.go:204]   - Booting up control plane ...
	I0420 01:34:20.036380  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:34:20.036479  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:34:20.036583  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:34:20.036705  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:34:20.036888  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:34:20.036955  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:34:20.037046  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037228  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037291  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037494  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037576  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037730  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037789  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037977  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038044  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.038262  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038284  142411 kubeadm.go:309] 
	I0420 01:34:20.038341  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:34:20.038382  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:34:20.038396  142411 kubeadm.go:309] 
	I0420 01:34:20.038443  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:34:20.038476  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:34:20.038612  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:34:20.038625  142411 kubeadm.go:309] 
	I0420 01:34:20.038735  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:34:20.038767  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:34:20.038794  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:34:20.038808  142411 kubeadm.go:309] 
	I0420 01:34:20.038902  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:34:20.038977  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:34:20.038987  142411 kubeadm.go:309] 
	I0420 01:34:20.039101  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:34:20.039203  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:34:20.039274  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:34:20.039342  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:34:20.039384  142411 kubeadm.go:309] 
	I0420 01:34:20.039417  142411 kubeadm.go:393] duration metric: took 8m0.622979268s to StartCluster
	I0420 01:34:20.039459  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:34:20.039514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:34:20.090236  142411 cri.go:89] found id: ""
	I0420 01:34:20.090262  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.090270  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:34:20.090276  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:34:20.090331  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:34:20.133841  142411 cri.go:89] found id: ""
	I0420 01:34:20.133867  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.133875  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:34:20.133883  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:34:20.133955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:34:20.176186  142411 cri.go:89] found id: ""
	I0420 01:34:20.176219  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.176230  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:34:20.176235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:34:20.176295  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:34:20.214895  142411 cri.go:89] found id: ""
	I0420 01:34:20.214932  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.214944  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:34:20.214951  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:34:20.215018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:34:20.257759  142411 cri.go:89] found id: ""
	I0420 01:34:20.257786  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.257795  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:34:20.257800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:34:20.257857  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:34:20.298111  142411 cri.go:89] found id: ""
	I0420 01:34:20.298153  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.298164  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:34:20.298172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:34:20.298226  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:34:20.333435  142411 cri.go:89] found id: ""
	I0420 01:34:20.333469  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.333481  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:34:20.333489  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:34:20.333554  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:34:20.370848  142411 cri.go:89] found id: ""
	I0420 01:34:20.370872  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.370880  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:34:20.370890  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:34:20.370902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:34:20.425495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:34:20.425536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:34:20.442039  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:34:20.442066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:34:20.523456  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:34:20.523483  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:34:20.523504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:34:20.633387  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:34:20.633427  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0420 01:34:20.688731  142411 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0420 01:34:20.688783  142411 out.go:239] * 
	W0420 01:34:20.688839  142411 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.688862  142411 out.go:239] * 
	W0420 01:34:20.689758  142411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:34:20.693376  142411 out.go:177] 
	W0420 01:34:20.694909  142411 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.694971  142411 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0420 01:34:20.695003  142411 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0420 01:34:20.696409  142411 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-564860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
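The kubeadm output above indicates the kubelet on this node never passed its health check on port 10248. For reference, the diagnostic commands that the kubeadm error message itself suggests can be run on the profile's node over SSH; a minimal sketch, assuming the old-k8s-version-564860 VM from this run is still reachable:

	minikube ssh -p old-k8s-version-564860
	# On the node: inspect the kubelet service and its recent logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List control-plane containers through CRI-O's CRI socket
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

The hint printed at the end of the log (passing --extra-config=kubelet.cgroup-driver=systemd to minikube start) is minikube's own suggestion for this failure mode; it is not verified by this run.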
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 2 (272.203182ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-564860 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-564860 logs -n 25: (1.595013952s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-172352 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | disable-driver-mounts-172352                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:17 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-338118             | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC | 20 Apr 24 01:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-907988  | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-269507            | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-564860        | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-338118                  | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-907988       | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:30 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-269507                 | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-564860             | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:21:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:21:33.400343  142411 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:21:33.400444  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400452  142411 out.go:304] Setting ErrFile to fd 2...
	I0420 01:21:33.400464  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400681  142411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:21:33.401213  142411 out.go:298] Setting JSON to false
	I0420 01:21:33.402151  142411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14640,"bootTime":1713561453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:21:33.402214  142411 start.go:139] virtualization: kvm guest
	I0420 01:21:33.404200  142411 out.go:177] * [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:21:33.405933  142411 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:21:33.407240  142411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:21:33.405946  142411 notify.go:220] Checking for updates...
	I0420 01:21:33.408693  142411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:21:33.409906  142411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:21:33.411155  142411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:21:33.412528  142411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:21:33.414062  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:21:33.414460  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.414524  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.428987  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0420 01:21:33.429348  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.429850  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.429873  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.430178  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.430370  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.431825  142411 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0420 01:21:33.432895  142411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:21:33.433209  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.433251  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.447157  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0420 01:21:33.447543  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.448080  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.448123  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.448444  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.448609  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.481664  142411 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:21:33.482784  142411 start.go:297] selected driver: kvm2
	I0420 01:21:33.482796  142411 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.482903  142411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:21:33.483572  142411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.483646  142411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:21:33.497421  142411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:21:33.497790  142411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:21:33.497854  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:21:33.497869  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:21:33.497915  142411 start.go:340] cluster config:
	{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.498027  142411 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.499624  142411 out.go:177] * Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	I0420 01:21:33.500874  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:21:33.500901  142411 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:21:33.500914  142411 cache.go:56] Caching tarball of preloaded images
	I0420 01:21:33.500992  142411 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:21:33.501007  142411 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 01:21:33.501116  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:21:33.501613  142411 start.go:360] acquireMachinesLock for old-k8s-version-564860: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:21:35.817529  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:38.889617  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:44.969590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:48.041555  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:54.121550  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:57.193604  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:03.273575  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:06.345487  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:12.425567  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:15.497538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:21.577563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:24.649534  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:30.729573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:33.801566  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:39.881590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:42.953591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:49.033641  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:52.105579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:58.185591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:01.257655  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:07.337585  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:10.409568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:16.489562  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:19.561602  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:25.641579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:28.713581  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:34.793618  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:37.865643  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:43.945593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:47.017561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:53.097597  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:56.169538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:02.249561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:05.321557  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:11.401563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:14.473539  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:20.553591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:23.625573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:29.705563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:32.777590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:38.857568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:41.929619  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:48.009565  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:51.081536  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:57.161593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:00.233633  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:03.237801  141927 start.go:364] duration metric: took 4m24.096402827s to acquireMachinesLock for "default-k8s-diff-port-907988"
	I0420 01:25:03.237873  141927 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:03.237883  141927 fix.go:54] fixHost starting: 
	I0420 01:25:03.238412  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:03.238453  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:03.254029  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I0420 01:25:03.254570  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:03.255071  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:25:03.255097  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:03.255474  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:03.255703  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:03.255871  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:25:03.257395  141927 fix.go:112] recreateIfNeeded on default-k8s-diff-port-907988: state=Stopped err=<nil>
	I0420 01:25:03.257430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	W0420 01:25:03.257577  141927 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:03.259083  141927 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-907988" ...
	I0420 01:25:03.260199  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Start
	I0420 01:25:03.260402  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring networks are active...
	I0420 01:25:03.261176  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network default is active
	I0420 01:25:03.261553  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network mk-default-k8s-diff-port-907988 is active
	I0420 01:25:03.262016  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Getting domain xml...
	I0420 01:25:03.262834  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Creating domain...
	I0420 01:25:03.235208  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:03.235275  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235620  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:25:03.235653  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235902  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:25:03.237636  141746 machine.go:97] duration metric: took 4m37.412949021s to provisionDockerMachine
	I0420 01:25:03.237677  141746 fix.go:56] duration metric: took 4m37.433896084s for fixHost
	I0420 01:25:03.237685  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 4m37.433927307s
	W0420 01:25:03.237715  141746 start.go:713] error starting host: provision: host is not running
	W0420 01:25:03.237980  141746 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0420 01:25:03.238076  141746 start.go:728] Will try again in 5 seconds ...
	I0420 01:25:04.453535  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting to get IP...
	I0420 01:25:04.454427  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454803  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454886  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.454785  143129 retry.go:31] will retry after 205.593849ms: waiting for machine to come up
	I0420 01:25:04.662560  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663106  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663133  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.663007  143129 retry.go:31] will retry after 246.821866ms: waiting for machine to come up
	I0420 01:25:04.911578  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912067  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912100  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.912014  143129 retry.go:31] will retry after 478.36287ms: waiting for machine to come up
	I0420 01:25:05.391624  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392063  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.391965  143129 retry.go:31] will retry after 495.387005ms: waiting for machine to come up
	I0420 01:25:05.888569  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889093  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889116  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.889009  143129 retry.go:31] will retry after 721.867239ms: waiting for machine to come up
	I0420 01:25:06.613018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613550  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613583  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:06.613495  143129 retry.go:31] will retry after 724.502229ms: waiting for machine to come up
	I0420 01:25:07.339473  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339974  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:07.339883  143129 retry.go:31] will retry after 916.936196ms: waiting for machine to come up
	I0420 01:25:08.258657  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259033  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259064  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:08.258981  143129 retry.go:31] will retry after 1.088675043s: waiting for machine to come up
	I0420 01:25:08.239597  141746 start.go:360] acquireMachinesLock for no-preload-338118: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:25:09.349021  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349421  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:09.349362  143129 retry.go:31] will retry after 1.139610002s: waiting for machine to come up
	I0420 01:25:10.490715  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491162  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491190  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:10.491119  143129 retry.go:31] will retry after 1.625829976s: waiting for machine to come up
	I0420 01:25:12.118751  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119231  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119254  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:12.119184  143129 retry.go:31] will retry after 2.904309002s: waiting for machine to come up
	I0420 01:25:15.025713  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026310  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:15.026227  143129 retry.go:31] will retry after 3.471792967s: waiting for machine to come up
	I0420 01:25:18.500247  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500679  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:18.500595  143129 retry.go:31] will retry after 4.499766051s: waiting for machine to come up
	I0420 01:25:23.005446  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.005935  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Found IP for machine: 192.168.39.222
	I0420 01:25:23.005956  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserving static IP address...
	I0420 01:25:23.005970  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has current primary IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.006453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.006479  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserved static IP address: 192.168.39.222
	I0420 01:25:23.006513  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | skip adding static IP to network mk-default-k8s-diff-port-907988 - found existing host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"}
	I0420 01:25:23.006537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for SSH to be available...
	I0420 01:25:23.006544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Getting to WaitForSSH function...
	I0420 01:25:23.009090  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009505  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.009537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009658  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH client type: external
	I0420 01:25:23.009695  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa (-rw-------)
	I0420 01:25:23.009732  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:23.009748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | About to run SSH command:
	I0420 01:25:23.009766  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | exit 0
	I0420 01:25:23.133489  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | SSH cmd err, output: <nil>: 
	I0420 01:25:23.133940  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetConfigRaw
	I0420 01:25:23.134589  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.137340  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.137685  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.137708  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.138000  141927 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/config.json ...
	I0420 01:25:23.138228  141927 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:23.138253  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:23.138461  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.140536  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.140815  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.140841  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.141024  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.141244  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141450  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141595  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.141777  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.142053  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.142067  141927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:23.249946  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:23.249979  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250250  141927 buildroot.go:166] provisioning hostname "default-k8s-diff-port-907988"
	I0420 01:25:23.250280  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250483  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.253030  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.253456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253564  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.253755  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.253978  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.254135  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.254334  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.254504  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.254517  141927 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-907988 && echo "default-k8s-diff-port-907988" | sudo tee /etc/hostname
	I0420 01:25:23.379061  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-907988
	
	I0420 01:25:23.379092  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.381893  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382249  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.382278  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382465  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.382666  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382831  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382939  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.383118  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.383324  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.383349  141927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-907988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-907988/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-907988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:23.499869  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:23.499903  141927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:23.499932  141927 buildroot.go:174] setting up certificates
	I0420 01:25:23.499941  141927 provision.go:84] configureAuth start
	I0420 01:25:23.499950  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.500178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.502735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503050  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.503085  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503201  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.505586  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.505924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.505968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.506036  141927 provision.go:143] copyHostCerts
	I0420 01:25:23.506136  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:23.506150  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:23.506233  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:23.506386  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:23.506396  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:23.506444  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:23.506525  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:23.506536  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:23.506569  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:23.506640  141927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-907988 san=[127.0.0.1 192.168.39.222 default-k8s-diff-port-907988 localhost minikube]
	I0420 01:25:23.598855  141927 provision.go:177] copyRemoteCerts
	I0420 01:25:23.598930  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:23.598967  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.602183  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.602544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.602903  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.603143  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.603301  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:23.688294  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:23.714719  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0420 01:25:23.744530  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:25:23.774733  141927 provision.go:87] duration metric: took 274.778779ms to configureAuth
	I0420 01:25:23.774756  141927 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:23.774990  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:23.775083  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.777817  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.778213  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778376  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.778596  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778763  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778984  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.779167  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.779364  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.779393  141927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:24.314463  142057 start.go:364] duration metric: took 4m32.915907541s to acquireMachinesLock for "embed-certs-269507"
	I0420 01:25:24.314618  142057 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:24.314645  142057 fix.go:54] fixHost starting: 
	I0420 01:25:24.315169  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:24.315220  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:24.331820  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0420 01:25:24.332243  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:24.332707  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:25:24.332730  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:24.333157  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:24.333371  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:24.333551  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:25:24.335004  142057 fix.go:112] recreateIfNeeded on embed-certs-269507: state=Stopped err=<nil>
	I0420 01:25:24.335044  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	W0420 01:25:24.335211  142057 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:24.337246  142057 out.go:177] * Restarting existing kvm2 VM for "embed-certs-269507" ...
	I0420 01:25:24.056795  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:24.056832  141927 machine.go:97] duration metric: took 918.585863ms to provisionDockerMachine
	I0420 01:25:24.056849  141927 start.go:293] postStartSetup for "default-k8s-diff-port-907988" (driver="kvm2")
	I0420 01:25:24.056865  141927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:24.056889  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.057250  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:24.057281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.060602  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.060992  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.061028  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.061196  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.061422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.061631  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.061785  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.152109  141927 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:24.157292  141927 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:24.157330  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:24.157397  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:24.157490  141927 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:24.157606  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:24.171039  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:24.201343  141927 start.go:296] duration metric: took 144.476748ms for postStartSetup
	I0420 01:25:24.201383  141927 fix.go:56] duration metric: took 20.963499628s for fixHost
	I0420 01:25:24.201409  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.204283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204648  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.204681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204842  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.205022  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205204  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205411  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.205732  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:24.206255  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:24.206269  141927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:24.314311  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576324.296261493
	
	I0420 01:25:24.314336  141927 fix.go:216] guest clock: 1713576324.296261493
	I0420 01:25:24.314346  141927 fix.go:229] Guest: 2024-04-20 01:25:24.296261493 +0000 UTC Remote: 2024-04-20 01:25:24.201388226 +0000 UTC m=+285.207728057 (delta=94.873267ms)
	I0420 01:25:24.314373  141927 fix.go:200] guest clock delta is within tolerance: 94.873267ms
	I0420 01:25:24.314380  141927 start.go:83] releasing machines lock for "default-k8s-diff-port-907988", held for 21.076529311s
	I0420 01:25:24.314420  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.314699  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:24.317281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.317731  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317858  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318364  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318557  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318664  141927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:24.318723  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.318833  141927 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:24.318862  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.321519  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321572  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321937  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.321968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321994  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.322011  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.322121  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322233  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322323  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322502  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322730  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.322871  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.403742  141927 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:24.429207  141927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:24.590621  141927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:24.597818  141927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:24.597890  141927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:24.617031  141927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:24.617050  141927 start.go:494] detecting cgroup driver to use...
	I0420 01:25:24.617126  141927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:24.643134  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:24.658222  141927 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:24.658275  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:24.672409  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:24.686722  141927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:24.810871  141927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:24.965702  141927 docker.go:233] disabling docker service ...
	I0420 01:25:24.965765  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:24.984504  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:24.999580  141927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:25.151023  141927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:25.278443  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
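Disabling the competing runtimes follows the usual systemd pattern: stop the socket and the service, disable the socket so it cannot re-activate the unit, and mask the service so nothing else can start it, leaving CRI-O as the only runtime behind the CRI socket. A condensed shell sketch of the sequence logged above (unit names as they appear in the log):

    # take cri-dockerd out of the picture: stop it, disable the socket, mask the service
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service

    # same treatment for docker itself
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service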
	I0420 01:25:25.295439  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:25.316425  141927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:25.316494  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.329052  141927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:25.329119  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.342102  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.354831  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.368084  141927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:25.380515  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.392952  141927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.411707  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.423776  141927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:25.434175  141927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:25.434234  141927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:25.449180  141927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:25.460018  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:25.579669  141927 ssh_runner.go:195] Run: sudo systemctl restart crio
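Taken together, the sed and sysctl commands above rewrite /etc/crio/crio.conf.d/02-crio.conf and load br_netfilter before CRI-O is restarted. Assuming every sed expression matched (the real drop-in carries more settings than these), the keys they touch should end up roughly as in this sketch:

    # rough expected state of the drop-in after the edits above (assumption, not captured output)
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]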
	I0420 01:25:25.741777  141927 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:25.741854  141927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:25.747422  141927 start.go:562] Will wait 60s for crictl version
	I0420 01:25:25.747478  141927 ssh_runner.go:195] Run: which crictl
	I0420 01:25:25.752164  141927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:25.800400  141927 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:25.800491  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.832099  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.865692  141927 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:25:24.338547  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Start
	I0420 01:25:24.338743  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring networks are active...
	I0420 01:25:24.339527  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network default is active
	I0420 01:25:24.340064  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network mk-embed-certs-269507 is active
	I0420 01:25:24.340520  142057 main.go:141] libmachine: (embed-certs-269507) Getting domain xml...
	I0420 01:25:24.341363  142057 main.go:141] libmachine: (embed-certs-269507) Creating domain...
	I0420 01:25:25.566725  142057 main.go:141] libmachine: (embed-certs-269507) Waiting to get IP...
	I0420 01:25:25.567704  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.568195  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.568263  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.568160  143271 retry.go:31] will retry after 229.672507ms: waiting for machine to come up
	I0420 01:25:25.799515  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.799964  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.799994  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.799916  143271 retry.go:31] will retry after 352.048372ms: waiting for machine to come up
	I0420 01:25:26.153710  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.154217  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.154245  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.154159  143271 retry.go:31] will retry after 451.404487ms: waiting for machine to come up
	I0420 01:25:25.867283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:25.870225  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.870725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:25.870748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.871001  141927 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:25.875986  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:25.890923  141927 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:25.891043  141927 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:25.891088  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:25.934665  141927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:25.934743  141927 ssh_runner.go:195] Run: which lz4
	I0420 01:25:25.939157  141927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:25:25.943759  141927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:25.943788  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:27.674416  141927 crio.go:462] duration metric: took 1.735279369s to copy over tarball
	I0420 01:25:27.674484  141927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
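Since crictl found none of the expected images, the cached preload tarball is pushed to the VM and unpacked directly into /var so that CRI-O's image store is already populated when the runtime comes back. The guest-side steps, written out as plain shell (paths exactly as in the log; the copy itself is done by ssh_runner's scp):

    # unpack the cached image tarball into /var, preserving security xattrs, then drop it
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4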
	I0420 01:25:26.607751  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.608327  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.608362  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.608273  143271 retry.go:31] will retry after 548.149542ms: waiting for machine to come up
	I0420 01:25:27.157746  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.158193  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.158220  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.158158  143271 retry.go:31] will retry after 543.066807ms: waiting for machine to come up
	I0420 01:25:27.702417  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.702812  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.702842  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.702778  143271 retry.go:31] will retry after 801.842999ms: waiting for machine to come up
	I0420 01:25:28.505673  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:28.506233  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:28.506264  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:28.506169  143271 retry.go:31] will retry after 1.176665861s: waiting for machine to come up
	I0420 01:25:29.684134  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:29.684642  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:29.684676  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:29.684582  143271 retry.go:31] will retry after 1.09397916s: waiting for machine to come up
	I0420 01:25:30.780467  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:30.780962  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:30.780987  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:30.780924  143271 retry.go:31] will retry after 1.560706704s: waiting for machine to come up
	I0420 01:25:30.280138  141927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.605620888s)
	I0420 01:25:30.280235  141927 crio.go:469] duration metric: took 2.605784372s to extract the tarball
	I0420 01:25:30.280269  141927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:30.323590  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:30.384053  141927 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:30.384083  141927 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:30.384094  141927 kubeadm.go:928] updating node { 192.168.39.222 8444 v1.30.0 crio true true} ...
	I0420 01:25:30.384258  141927 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-907988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:30.384347  141927 ssh_runner.go:195] Run: crio config
	I0420 01:25:30.431033  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:30.431059  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:30.431074  141927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:30.431094  141927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-907988 NodeName:default-k8s-diff-port-907988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:30.431267  141927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-907988"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:30.431327  141927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:30.444735  141927 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:30.444807  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:30.457543  141927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0420 01:25:30.477858  141927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:30.497632  141927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0420 01:25:30.518062  141927 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:30.522820  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
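The bash one-liner above is an idempotent way to pin a host entry: filter out any existing line for the name, append the desired mapping, and copy the temporary file back over /etc/hosts in one privileged step. The same pattern, spread over several lines for readability (IP and hostname taken from the log):

    # make /etc/hosts contain exactly one entry for control-plane.minikube.internal
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.39.222	control-plane.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts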
	I0420 01:25:30.538677  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:30.686290  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:30.721316  141927 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988 for IP: 192.168.39.222
	I0420 01:25:30.721342  141927 certs.go:194] generating shared ca certs ...
	I0420 01:25:30.721373  141927 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:30.721607  141927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:30.721664  141927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:30.721679  141927 certs.go:256] generating profile certs ...
	I0420 01:25:30.721789  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/client.key
	I0420 01:25:30.721873  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key.b8de10ae
	I0420 01:25:30.721912  141927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key
	I0420 01:25:30.722019  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:30.722052  141927 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:30.722067  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:30.722094  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:30.722122  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:30.722144  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:30.722189  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:30.723048  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:30.762666  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:30.800218  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:30.849282  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:30.893355  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0420 01:25:30.924642  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:30.956734  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:30.986491  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:31.015876  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:31.043860  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:31.073822  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:31.100731  141927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:31.119908  141927 ssh_runner.go:195] Run: openssl version
	I0420 01:25:31.128209  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:31.140164  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145371  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145432  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.151726  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:31.163371  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:31.175115  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180237  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180286  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.186548  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:31.198703  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:31.211529  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217258  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217326  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.223822  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
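The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: a CA in /etc/ssl/certs is located through a link named after the hash of its subject plus a .0 suffix. A minimal sketch of creating such a link by hand for one of the certificates mentioned in the log:

    # derive the subject hash and create the <hash>.0 link that OpenSSL-based clients look up
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"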
	I0420 01:25:31.236363  141927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:31.241793  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:31.250826  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:31.259850  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:31.267387  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:31.274477  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:31.281452  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
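Each of the -checkend 86400 runs above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it is still valid for at least that long, non-zero means it expires within that window or has already expired. For example:

    # succeeds only if the cert stays valid for at least another 24h
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate still valid for 24h or more"
    else
        echo "certificate expires within 24h (or is already expired)"
    fi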
	I0420 01:25:31.287980  141927 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:31.288094  141927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:31.288159  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.344552  141927 cri.go:89] found id: ""
	I0420 01:25:31.344646  141927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:31.357049  141927 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:31.357075  141927 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:31.357081  141927 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:31.357147  141927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:31.368636  141927 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:31.370055  141927 kubeconfig.go:125] found "default-k8s-diff-port-907988" server: "https://192.168.39.222:8444"
	I0420 01:25:31.373063  141927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:31.384821  141927 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.222
	I0420 01:25:31.384861  141927 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:31.384876  141927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:31.384946  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.432801  141927 cri.go:89] found id: ""
	I0420 01:25:31.432902  141927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:31.458842  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:31.472706  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:31.472728  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:31.472780  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:25:31.486221  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:31.486276  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:31.500036  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:25:31.510180  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:31.510237  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:31.520560  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.530333  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:31.530387  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.541053  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:25:31.551200  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:31.551257  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:31.561364  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:31.572967  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:31.690537  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.319980  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.546554  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.631937  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.729738  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:32.729838  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.230769  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.730452  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.807772  141927 api_server.go:72] duration metric: took 1.07803345s to wait for apiserver process to appear ...
	I0420 01:25:33.807805  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:33.807829  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:33.808551  141927 api_server.go:269] stopped: https://192.168.39.222:8444/healthz: Get "https://192.168.39.222:8444/healthz": dial tcp 192.168.39.222:8444: connect: connection refused
	I0420 01:25:32.342951  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:32.343373  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:32.343420  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:32.343352  143271 retry.go:31] will retry after 1.871100952s: waiting for machine to come up
	I0420 01:25:34.215884  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:34.216313  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:34.216341  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:34.216253  143271 retry.go:31] will retry after 2.017753728s: waiting for machine to come up
	I0420 01:25:36.237296  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:36.237906  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:36.237936  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:36.237856  143271 retry.go:31] will retry after 3.431912056s: waiting for machine to come up
	I0420 01:25:34.308465  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.098889  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.098928  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.098945  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.149496  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.149534  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.308936  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.313975  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.314005  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:37.808680  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.818747  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.818784  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.307905  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.318528  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.318563  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.808127  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.816135  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.816167  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.307985  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.313712  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.313753  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.808225  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.812825  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.812858  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.308366  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.312930  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:40.312970  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.808320  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.812979  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:25:40.820265  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:25:40.820289  141927 api_server.go:131] duration metric: took 7.012476869s to wait for apiserver health ...
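The retries above are the usual pattern of polling the apiserver's /healthz endpoint until it answers 200. Below is a minimal Go sketch of that kind of loop, not minikube's actual api_server.go; the URL comes from the log, while the timeout, polling interval, and client settings are assumptions.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver's cert is signed by the cluster CA; verification is
			// skipped here only to keep the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "returned 200: ok" case in the log
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.222:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}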
	I0420 01:25:40.820298  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:40.820304  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:40.822367  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:25:39.671070  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:39.671556  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:39.671614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:39.671502  143271 retry.go:31] will retry after 3.954438708s: waiting for machine to come up
	I0420 01:25:40.823843  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:25:40.837960  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
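The bridge CNI step creates /etc/cni/net.d and copies a 496-byte 1-k8s.conflist onto the node. The actual contents of that file are not shown in the log; the sketch below writes a generic bridge conflist purely for illustration (the file name is reused from the log, the JSON body is an assumption).

	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	// A generic bridge CNI config. This is NOT the exact file minikube writes;
	// it only illustrates the shape of a bridge + portmap conflist.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}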
	I0420 01:25:40.858294  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:25:40.867542  141927 system_pods.go:59] 8 kube-system pods found
	I0420 01:25:40.867577  141927 system_pods.go:61] "coredns-7db6d8ff4d-7v886" [0e0b3a5f-041a-4bbc-94aa-c9571a8761ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:25:40.867584  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [88f687c4-8865-4fe6-92f1-448cfde6117c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:25:40.867590  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [2c9f0d90-35c6-45ad-b9b1-9504c55a1e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:25:40.867597  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [949ce449-06b4-4650-8ba0-7567637d6aec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:25:40.867604  141927 system_pods.go:61] "kube-proxy-dg6xn" [1124d9e8-41aa-44a9-8a4a-eafd2cd6c6c9] Running
	I0420 01:25:40.867626  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [df93de11-c23d-4f5d-afd4-1af7928933fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:25:40.867640  141927 system_pods.go:61] "metrics-server-569cc877fc-rqqlt" [2c7d91c3-fce8-4603-a7be-8d9b415d71f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:25:40.867647  141927 system_pods.go:61] "storage-provisioner" [af4dc99d-feef-4c24-852a-4c8cad22dd7d] Running
	I0420 01:25:40.867654  141927 system_pods.go:74] duration metric: took 9.33485ms to wait for pod list to return data ...
	I0420 01:25:40.867670  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:25:40.871045  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:25:40.871067  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:25:40.871078  141927 node_conditions.go:105] duration metric: took 3.402743ms to run NodePressure ...
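Listing kube-system pods and reading node capacity, as system_pods.go and node_conditions.go do here, maps onto straightforward client-go calls. A sketch under the assumption that a kubeconfig for the profile is available; the kubeconfig path is a placeholder, not minikube's real file layout.

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of "waiting for kube-system pods to appear": list and print them.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		}

		// Equivalent of the NodePressure/capacity check: read node capacity fields.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		}
	}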
	I0420 01:25:40.871094  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:41.142438  141927 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151801  141927 kubeadm.go:733] kubelet initialised
	I0420 01:25:41.151822  141927 kubeadm.go:734] duration metric: took 9.359538ms waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151830  141927 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:25:41.160583  141927 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.169184  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169214  141927 pod_ready.go:81] duration metric: took 8.596607ms for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.169226  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169234  141927 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.175518  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175544  141927 pod_ready.go:81] duration metric: took 6.298273ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.175558  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175567  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.189038  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189062  141927 pod_ready.go:81] duration metric: took 13.484198ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.189072  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189078  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.261162  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261191  141927 pod_ready.go:81] duration metric: took 72.106763ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.261203  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261210  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662532  141927 pod_ready.go:92] pod "kube-proxy-dg6xn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:41.662553  141927 pod_ready.go:81] duration metric: took 401.337101ms for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662562  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:43.670281  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
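The pod_ready.go entries above poll each system-critical pod until its Ready condition turns True, skipping pods whose node is itself not Ready. A rough client-go equivalent of the per-pod wait follows; the function name and the polling interval are illustrative, not minikube's real helper.

	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitPodReady polls a pod until its Ready condition is True or the timeout
	// expires, roughly what the pod_ready.go lines in the log are doing.
	func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // interval is arbitrary; the log shows sub-second to multi-second gaps
		}
		return fmt.Errorf("pod %s/%s did not become Ready within %s", ns, name, timeout)
	}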
	I0420 01:25:45.122924  142411 start.go:364] duration metric: took 4m11.621269498s to acquireMachinesLock for "old-k8s-version-564860"
	I0420 01:25:45.122996  142411 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:45.123018  142411 fix.go:54] fixHost starting: 
	I0420 01:25:45.123538  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:45.123581  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:45.141340  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0420 01:25:45.141873  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:45.142555  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:25:45.142592  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:45.142979  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:45.143234  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:25:45.143426  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetState
	I0420 01:25:45.145067  142411 fix.go:112] recreateIfNeeded on old-k8s-version-564860: state=Stopped err=<nil>
	I0420 01:25:45.145114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	W0420 01:25:45.145289  142411 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:45.147498  142411 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-564860" ...
	I0420 01:25:43.630616  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631126  142057 main.go:141] libmachine: (embed-certs-269507) Found IP for machine: 192.168.50.184
	I0420 01:25:43.631159  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has current primary IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631173  142057 main.go:141] libmachine: (embed-certs-269507) Reserving static IP address...
	I0420 01:25:43.631625  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.631677  142057 main.go:141] libmachine: (embed-certs-269507) DBG | skip adding static IP to network mk-embed-certs-269507 - found existing host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"}
	I0420 01:25:43.631692  142057 main.go:141] libmachine: (embed-certs-269507) Reserved static IP address: 192.168.50.184
	I0420 01:25:43.631710  142057 main.go:141] libmachine: (embed-certs-269507) Waiting for SSH to be available...
	I0420 01:25:43.631731  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Getting to WaitForSSH function...
	I0420 01:25:43.634292  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.634650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634833  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH client type: external
	I0420 01:25:43.634883  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa (-rw-------)
	I0420 01:25:43.634916  142057 main.go:141] libmachine: (embed-certs-269507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:43.634935  142057 main.go:141] libmachine: (embed-certs-269507) DBG | About to run SSH command:
	I0420 01:25:43.634949  142057 main.go:141] libmachine: (embed-certs-269507) DBG | exit 0
	I0420 01:25:43.757712  142057 main.go:141] libmachine: (embed-certs-269507) DBG | SSH cmd err, output: <nil>: 
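WaitForSSH above shells out to the system ssh binary with the options shown and runs "exit 0" until the command succeeds. A small sketch of that reachability probe; the address and option list mirror the log, while the key path and retry count are placeholders.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// probeSSH runs `ssh ... docker@addr exit 0` with options similar to those in
	// the log, returning nil once the VM accepts the connection.
	func probeSSH(addr, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + addr,
			"exit 0",
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh not ready: %v (%s)", err, out)
		}
		return nil
	}

	func main() {
		for attempt := 0; attempt < 20; attempt++ { // retry count is arbitrary
			if err := probeSSH("192.168.50.184", "/path/to/machines/embed-certs-269507/id_rsa"); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}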
	I0420 01:25:43.758118  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetConfigRaw
	I0420 01:25:43.758820  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:43.761626  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762007  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.762083  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762328  142057 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/config.json ...
	I0420 01:25:43.762556  142057 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:43.762575  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:43.762827  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.765841  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.766304  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766461  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.766636  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766766  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766884  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.767111  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.767371  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.767386  142057 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:43.874709  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:43.874741  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875018  142057 buildroot.go:166] provisioning hostname "embed-certs-269507"
	I0420 01:25:43.875052  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875265  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.878226  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878645  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.878675  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878767  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.878976  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879120  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879246  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.879375  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.879585  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.879613  142057 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-269507 && echo "embed-certs-269507" | sudo tee /etc/hostname
	I0420 01:25:44.003458  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-269507
	
	I0420 01:25:44.003502  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.006277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006706  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.006745  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006922  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.007227  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007417  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007604  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.007772  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.007959  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.007979  142057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-269507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-269507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-269507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:44.124457  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:44.124494  142057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:44.124516  142057 buildroot.go:174] setting up certificates
	I0420 01:25:44.124526  142057 provision.go:84] configureAuth start
	I0420 01:25:44.124537  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:44.124850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:44.127589  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.127958  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.127980  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.128196  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.130485  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130792  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.130830  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130992  142057 provision.go:143] copyHostCerts
	I0420 01:25:44.131060  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:44.131075  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:44.131132  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:44.131237  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:44.131246  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:44.131266  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:44.131326  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:44.131333  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:44.131349  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:44.131397  142057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.embed-certs-269507 san=[127.0.0.1 192.168.50.184 embed-certs-269507 localhost minikube]
	I0420 01:25:44.404404  142057 provision.go:177] copyRemoteCerts
	I0420 01:25:44.404469  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:44.404498  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.407318  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.407683  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.408033  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.408182  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.408307  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.498069  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:44.524979  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0420 01:25:44.553537  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:25:44.580307  142057 provision.go:87] duration metric: took 455.767679ms to configureAuth
	I0420 01:25:44.580332  142057 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:44.580609  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:44.580722  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.583352  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583728  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.583761  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583978  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.584205  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584383  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584516  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.584715  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.584905  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.584926  142057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:44.882565  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:44.882599  142057 machine.go:97] duration metric: took 1.120028956s to provisionDockerMachine
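The CRI-O step writes /etc/sysconfig/crio.minikube and restarts the service by running a shell pipeline over SSH; minikube drives this through its own ssh_runner. The sketch below does the same over golang.org/x/crypto/ssh purely as an illustration; the host and command string mirror the log, while the key path and hardcoded values are assumptions.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/embed-certs-269507/id_rsa") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
		}
		client, err := ssh.Dial("tcp", "192.168.50.184:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// Same shape of command as in the log: write the sysconfig snippet, then restart crio.
		cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
		out, err := session.CombinedOutput(cmd)
		if err != nil {
			log.Fatalf("remote command failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}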
	I0420 01:25:44.882612  142057 start.go:293] postStartSetup for "embed-certs-269507" (driver="kvm2")
	I0420 01:25:44.882622  142057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:44.882639  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:44.882971  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:44.883012  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.885829  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886181  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.886208  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886372  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.886598  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.886761  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.886915  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.972428  142057 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:44.977228  142057 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:44.977257  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:44.977344  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:44.977435  142057 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:44.977552  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:44.987372  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:45.014435  142057 start.go:296] duration metric: took 131.807177ms for postStartSetup
	I0420 01:25:45.014484  142057 fix.go:56] duration metric: took 20.699839101s for fixHost
	I0420 01:25:45.014512  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.017361  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017768  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.017795  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017943  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.018150  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018302  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018421  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.018643  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:45.018815  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:45.018827  142057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:45.122766  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576345.101529100
	
	I0420 01:25:45.122788  142057 fix.go:216] guest clock: 1713576345.101529100
	I0420 01:25:45.122796  142057 fix.go:229] Guest: 2024-04-20 01:25:45.1015291 +0000 UTC Remote: 2024-04-20 01:25:45.014489313 +0000 UTC m=+293.764572165 (delta=87.039787ms)
	I0420 01:25:45.122823  142057 fix.go:200] guest clock delta is within tolerance: 87.039787ms
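The clock-skew check above compares the guest's date +%s.%N output (which the logger renders as %!s(MISSING) in the command line earlier) against the host clock and accepts a small drift. A self-contained sketch of that comparison, reusing the two timestamps from the log; the tolerance value is an assumption, since the real threshold is not printed.

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
	// it drifts from the host clock, the same arithmetic fix.go reports above.
	func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad or truncate the fractional part to exactly nine digits (nanoseconds).
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		// Host-side timestamp from the log: 2024-04-20 01:25:45.014489313 +0000 UTC.
		host := time.Date(2024, 4, 20, 1, 25, 45, 14489313, time.UTC)
		delta, err := guestClockDelta("1713576345.101529100", host)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed tolerance; minikube's threshold is not shown here
		fmt.Printf("delta=%v within tolerance: %v\n", delta,
			math.Abs(delta.Seconds()) <= tolerance.Seconds()) // prints delta=87.039787ms within tolerance: true
	}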
	I0420 01:25:45.122828  142057 start.go:83] releasing machines lock for "embed-certs-269507", held for 20.808247089s
	I0420 01:25:45.122851  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.123156  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:45.125956  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126377  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.126408  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126536  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127059  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127264  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127349  142057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:45.127404  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.127470  142057 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:45.127497  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.130071  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130393  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130427  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130447  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130727  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.130825  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130854  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130932  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131041  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.131115  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131220  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.131301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131451  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131597  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.211824  142057 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:45.236425  142057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:45.383069  142057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:45.391072  142057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:45.391159  142057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:45.410287  142057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:45.410313  142057 start.go:494] detecting cgroup driver to use...
	I0420 01:25:45.410395  142057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:45.433663  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:45.452933  142057 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:45.452999  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:45.473208  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:45.493261  142057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:45.650111  142057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:45.847482  142057 docker.go:233] disabling docker service ...
	I0420 01:25:45.847559  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:45.871032  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:45.892747  142057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:46.076222  142057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:46.218078  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:46.236006  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:46.259279  142057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:46.259363  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.272573  142057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:46.272647  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.286468  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.298708  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.313197  142057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:46.332844  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.345531  142057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.367686  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.379702  142057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:46.390491  142057 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:46.390558  142057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:46.406027  142057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:46.417370  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:46.543690  142057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:46.725507  142057 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:46.725599  142057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:46.734173  142057 start.go:562] Will wait 60s for crictl version
	I0420 01:25:46.734246  142057 ssh_runner.go:195] Run: which crictl
	I0420 01:25:46.740381  142057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:46.801341  142057 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:46.801431  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.843121  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.889958  142057 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:25:45.148885  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .Start
	I0420 01:25:45.149115  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring networks are active...
	I0420 01:25:45.149856  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network default is active
	I0420 01:25:45.150205  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network mk-old-k8s-version-564860 is active
	I0420 01:25:45.150615  142411 main.go:141] libmachine: (old-k8s-version-564860) Getting domain xml...
	I0420 01:25:45.151296  142411 main.go:141] libmachine: (old-k8s-version-564860) Creating domain...
	I0420 01:25:46.465532  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting to get IP...
	I0420 01:25:46.466816  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.467306  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.467383  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.467288  143434 retry.go:31] will retry after 265.980653ms: waiting for machine to come up
	I0420 01:25:46.735144  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.735676  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.735700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.735627  143434 retry.go:31] will retry after 254.534112ms: waiting for machine to come up
	I0420 01:25:46.992222  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.992707  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.992738  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.992621  143434 retry.go:31] will retry after 434.179962ms: waiting for machine to come up
	I0420 01:25:47.428397  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.428949  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.428987  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.428899  143434 retry.go:31] will retry after 533.143168ms: waiting for machine to come up
	I0420 01:25:47.963467  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.964008  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.964035  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.963957  143434 retry.go:31] will retry after 601.536298ms: waiting for machine to come up
	I0420 01:25:45.675159  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:48.175457  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:48.175487  141927 pod_ready.go:81] duration metric: took 6.512916578s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:48.175499  141927 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:46.891233  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:46.894647  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895107  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:46.895170  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895398  142057 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:46.900604  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:46.920025  142057 kubeadm.go:877] updating cluster {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:46.920184  142057 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:46.920247  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:46.967086  142057 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:46.967171  142057 ssh_runner.go:195] Run: which lz4
	I0420 01:25:46.973391  142057 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:25:46.979210  142057 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:46.979241  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:48.806615  142057 crio.go:462] duration metric: took 1.83326325s to copy over tarball
	I0420 01:25:48.806701  142057 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:25:48.567922  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:48.568436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:48.568469  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:48.568387  143434 retry.go:31] will retry after 853.809635ms: waiting for machine to come up
	I0420 01:25:49.423590  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:49.424154  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:49.424178  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:49.424099  143434 retry.go:31] will retry after 1.096859163s: waiting for machine to come up
	I0420 01:25:50.522906  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:50.523406  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:50.523436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:50.523350  143434 retry.go:31] will retry after 983.057252ms: waiting for machine to come up
	I0420 01:25:51.508033  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:51.508557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:51.508596  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:51.508497  143434 retry.go:31] will retry after 1.463876638s: waiting for machine to come up
	I0420 01:25:52.974032  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:52.974508  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:52.974536  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:52.974459  143434 retry.go:31] will retry after 1.859889372s: waiting for machine to come up
	I0420 01:25:50.183489  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:53.262055  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:51.389972  142057 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.583237436s)
	I0420 01:25:51.390002  142057 crio.go:469] duration metric: took 2.583356337s to extract the tarball
	I0420 01:25:51.390010  142057 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:51.434741  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:51.489945  142057 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:51.489974  142057 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:51.489984  142057 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0420 01:25:51.490126  142057 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-269507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:51.490226  142057 ssh_runner.go:195] Run: crio config
	I0420 01:25:51.548273  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:25:51.548299  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:51.548316  142057 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:51.548356  142057 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-269507 NodeName:embed-certs-269507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:51.548534  142057 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-269507"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:51.548614  142057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:51.560359  142057 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:51.560428  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:51.571609  142057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0420 01:25:51.594462  142057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:51.621417  142057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0420 01:25:51.649250  142057 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:51.655304  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:51.675476  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:51.809652  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:51.829341  142057 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507 for IP: 192.168.50.184
	I0420 01:25:51.829405  142057 certs.go:194] generating shared ca certs ...
	I0420 01:25:51.829430  142057 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:51.829627  142057 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:51.829687  142057 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:51.829697  142057 certs.go:256] generating profile certs ...
	I0420 01:25:51.829823  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/client.key
	I0420 01:25:52.088423  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key.c1e63643
	I0420 01:25:52.088542  142057 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key
	I0420 01:25:52.088748  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:52.088811  142057 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:52.088841  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:52.088880  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:52.088919  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:52.088959  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:52.089020  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:52.090046  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:52.130739  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:52.163426  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:52.202470  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:52.232070  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0420 01:25:52.265640  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:52.305670  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:52.336788  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:52.371507  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:52.403015  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:52.433761  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:52.461373  142057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:52.480675  142057 ssh_runner.go:195] Run: openssl version
	I0420 01:25:52.486965  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:52.499466  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506355  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506409  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.514625  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:52.530107  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:52.544051  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549426  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549495  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.555960  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:25:52.569332  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:52.583057  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588323  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588390  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.594622  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:52.607021  142057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:52.612270  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:52.619182  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:52.626168  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:52.633276  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:52.639840  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:52.646478  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 01:25:52.652982  142057 kubeadm.go:391] StartCluster: {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:52.653130  142057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:52.653182  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.699113  142057 cri.go:89] found id: ""
	I0420 01:25:52.699200  142057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:52.712835  142057 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:52.712859  142057 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:52.712867  142057 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:52.712914  142057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:52.726130  142057 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:52.727354  142057 kubeconfig.go:125] found "embed-certs-269507" server: "https://192.168.50.184:8443"
	I0420 01:25:52.729600  142057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:52.744185  142057 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0420 01:25:52.744217  142057 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:52.744231  142057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:52.744292  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.792889  142057 cri.go:89] found id: ""
	I0420 01:25:52.792967  142057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:52.812771  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:52.824478  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:52.824495  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:52.824533  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:25:52.835612  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:52.835679  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:52.847089  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:25:52.858049  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:52.858126  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:52.872787  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.886588  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:52.886649  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.899467  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:25:52.910884  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:52.910942  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:52.922217  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:52.933432  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:53.108167  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.044709  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.257949  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.327450  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.426738  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:54.426849  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:54.926955  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.427198  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.489075  142057 api_server.go:72] duration metric: took 1.06233038s to wait for apiserver process to appear ...
	I0420 01:25:55.489109  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:55.489137  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:55.489682  142057 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0420 01:25:55.989278  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:54.836137  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:54.836639  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:54.836670  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:54.836584  143434 retry.go:31] will retry after 2.172259495s: waiting for machine to come up
	I0420 01:25:57.011412  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:57.011810  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:57.011840  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:57.011782  143434 retry.go:31] will retry after 2.279304552s: waiting for machine to come up
	I0420 01:25:55.684205  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:57.686312  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:58.334562  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.334594  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.334614  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.344779  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.344814  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.490111  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.499158  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.499194  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:58.989417  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.996443  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.996477  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.489585  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.496235  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:59.496271  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.989892  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.994154  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:26:00.000276  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:26:00.000301  142057 api_server.go:131] duration metric: took 4.511183577s to wait for apiserver health ...
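
For context, the healthz polling above (repeated GETs against https://192.168.50.184:8443/healthz, logging the 500 responses until the post-start hooks report ok) amounts to roughly the sketch below. This is an illustrative stand-alone probe, not minikube's actual api_server.go code; the URL, timeout and poll interval are assumptions.

    // healthzpoll.go - minimal sketch of apiserver health polling of the kind the
    // api_server.go log lines above show; names and intervals are illustrative.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a self-signed cert during bootstrap, so the
            // probe skips verification (illustrative; a real client would pin the CA).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered "ok"
                }
                // A 500 with "[-]poststarthook/... failed" means a post-start hook
                // has not finished yet; keep polling.
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.184:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
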
	I0420 01:26:00.000311  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:26:00.000317  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:00.002217  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:26:00.003646  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:26:00.018114  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
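
The 496-byte conflist itself is not reproduced in the log. A representative bridge CNI configuration of the kind written to /etc/cni/net.d looks roughly like the sketch below; the file path matches the log, but the JSON body and the 10.244.0.0/16 subnet are assumptions, not the exact bytes minikube ships.

    // write_cni.go - illustrative only: writes a typical bridge CNI conflist to the
    // path seen in the log. The JSON body is an assumed example, not minikube's file.
    package main

    import (
        "log"
        "os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
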
	I0420 01:26:00.040866  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:26:00.050481  142057 system_pods.go:59] 8 kube-system pods found
	I0420 01:26:00.050514  142057 system_pods.go:61] "coredns-7db6d8ff4d-79bzc" [af5f0029-75b5-4131-8c60-5a4fee48c618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:26:00.050524  142057 system_pods.go:61] "etcd-embed-certs-269507" [d6dfc301-0cfb-4bfb-99f7-948b77b38f53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:26:00.050533  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [915deee2-f571-4337-bcdc-07f40d06b9c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:26:00.050539  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [21c885b0-6d1b-4593-87f3-141e512af7dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:26:00.050545  142057 system_pods.go:61] "kube-proxy-crzk6" [d5972e9a-15cd-4b62-90d5-c10bdfa20989] Running
	I0420 01:26:00.050553  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [1e556102-d4c9-494c-baf2-ab7e62d7d1e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:26:00.050559  142057 system_pods.go:61] "metrics-server-569cc877fc-8s79l" [1dc06e4a-3f47-4ef1-8757-81262c52fe55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:26:00.050583  142057 system_pods.go:61] "storage-provisioner" [f7b03907-0042-48d8-981b-1b8e665d58e7] Running
	I0420 01:26:00.050600  142057 system_pods.go:74] duration metric: took 9.699819ms to wait for pod list to return data ...
	I0420 01:26:00.050608  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:26:00.053915  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:26:00.053964  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:26:00.053975  142057 node_conditions.go:105] duration metric: took 3.363162ms to run NodePressure ...
	I0420 01:26:00.053994  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:00.327736  142057 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332409  142057 kubeadm.go:733] kubelet initialised
	I0420 01:26:00.332434  142057 kubeadm.go:734] duration metric: took 4.671334ms waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332446  142057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:26:00.338296  142057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
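
The pod_ready waits that follow boil down to polling each system-critical pod's Ready condition. Below is a minimal client-go sketch of that check, assuming a placeholder kubeconfig path and the coredns pod name from the log; minikube's own pod_ready.go is more involved.

    // podready.go - minimal sketch of checking a pod's Ready condition with client-go;
    // the kubeconfig path is a placeholder, the pod name is taken from the log.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-79bzc", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
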
	I0420 01:25:59.292382  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:59.292905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:59.292939  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:59.292852  143434 retry.go:31] will retry after 4.056028382s: waiting for machine to come up
	I0420 01:26:03.350591  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:03.351022  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:26:03.351047  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:26:03.350978  143434 retry.go:31] will retry after 5.38819739s: waiting for machine to come up
	I0420 01:26:00.184338  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.684685  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.345607  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:03.850887  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:03.850915  142057 pod_ready.go:81] duration metric: took 3.512592061s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:03.850929  142057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:05.857665  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:05.183082  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:07.682906  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:10.191165  141746 start.go:364] duration metric: took 1m1.9514957s to acquireMachinesLock for "no-preload-338118"
	I0420 01:26:10.191222  141746 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:26:10.191235  141746 fix.go:54] fixHost starting: 
	I0420 01:26:10.191624  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:26:10.191668  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:26:10.212169  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I0420 01:26:10.212568  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:26:10.213074  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:26:10.213120  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:26:10.213524  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:26:10.213755  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:10.213957  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:26:10.215578  141746 fix.go:112] recreateIfNeeded on no-preload-338118: state=Stopped err=<nil>
	I0420 01:26:10.215604  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	W0420 01:26:10.215788  141746 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:26:10.217632  141746 out.go:177] * Restarting existing kvm2 VM for "no-preload-338118" ...
	I0420 01:26:10.218915  141746 main.go:141] libmachine: (no-preload-338118) Calling .Start
	I0420 01:26:10.219094  141746 main.go:141] libmachine: (no-preload-338118) Ensuring networks are active...
	I0420 01:26:10.219820  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network default is active
	I0420 01:26:10.220181  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network mk-no-preload-338118 is active
	I0420 01:26:10.220584  141746 main.go:141] libmachine: (no-preload-338118) Getting domain xml...
	I0420 01:26:10.221275  141746 main.go:141] libmachine: (no-preload-338118) Creating domain...
	I0420 01:26:08.363522  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:09.858701  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:09.858731  142057 pod_ready.go:81] duration metric: took 6.007793209s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:09.858742  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:08.743367  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.743867  142411 main.go:141] libmachine: (old-k8s-version-564860) Found IP for machine: 192.168.61.91
	I0420 01:26:08.743896  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserving static IP address...
	I0420 01:26:08.743914  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has current primary IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.744294  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.744324  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserved static IP address: 192.168.61.91
	I0420 01:26:08.744344  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | skip adding static IP to network mk-old-k8s-version-564860 - found existing host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"}
	I0420 01:26:08.744368  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Getting to WaitForSSH function...
	I0420 01:26:08.744387  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting for SSH to be available...
	I0420 01:26:08.746714  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747119  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.747155  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747278  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH client type: external
	I0420 01:26:08.747314  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa (-rw-------)
	I0420 01:26:08.747346  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:08.747359  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | About to run SSH command:
	I0420 01:26:08.747373  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | exit 0
	I0420 01:26:08.877633  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:08.878016  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:26:08.878715  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:08.881556  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.881982  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.882028  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.882326  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:26:08.882586  142411 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:08.882613  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:08.882853  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:08.885133  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885479  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.885510  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885647  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:08.885843  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886029  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886192  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:08.886403  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:08.886642  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:08.886657  142411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:09.006625  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:09.006655  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.006914  142411 buildroot.go:166] provisioning hostname "old-k8s-version-564860"
	I0420 01:26:09.006940  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.007144  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.010016  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010349  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.010374  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010597  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.010841  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011040  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011235  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.011439  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.011682  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.011718  142411 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-564860 && echo "old-k8s-version-564860" | sudo tee /etc/hostname
	I0420 01:26:09.155581  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-564860
	
	I0420 01:26:09.155612  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.158583  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159021  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.159068  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159285  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.159519  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159747  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159933  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.160128  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.160362  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.160390  142411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-564860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-564860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-564860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:09.288804  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:09.288834  142411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:09.288856  142411 buildroot.go:174] setting up certificates
	I0420 01:26:09.288867  142411 provision.go:84] configureAuth start
	I0420 01:26:09.288877  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.289286  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:09.292454  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.292884  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.292923  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.293076  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.295234  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295537  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.295565  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295675  142411 provision.go:143] copyHostCerts
	I0420 01:26:09.295747  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:09.295758  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:09.295811  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:09.295936  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:09.295951  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:09.295981  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:09.296063  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:09.296075  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:09.296095  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:09.296154  142411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-564860 san=[127.0.0.1 192.168.61.91 localhost minikube old-k8s-version-564860]
	I0420 01:26:09.436313  142411 provision.go:177] copyRemoteCerts
	I0420 01:26:09.436373  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:09.436401  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.439316  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.439743  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439856  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.440057  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.440226  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.440360  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:09.529141  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:09.558376  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0420 01:26:09.586393  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:26:09.615274  142411 provision.go:87] duration metric: took 326.393984ms to configureAuth
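
The configureAuth step above issues a server certificate carrying the SANs listed in the log (127.0.0.1, 192.168.61.91, localhost, minikube, old-k8s-version-564860). The following crypto/x509 sketch shows how such a SAN-bearing certificate can be produced; it is self-signed for brevity, whereas minikube signs against its ca.pem/ca-key.pem.

    // servercert.go - illustrative sketch of issuing a server cert with the SANs
    // from the log. Self-signed for brevity; minikube actually signs with its CA key.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-564860"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-564860"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.91")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
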
	I0420 01:26:09.615300  142411 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:09.615501  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:26:09.615590  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.618470  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.618905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.618938  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.619141  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.619325  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619505  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619662  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.619862  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.620073  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.620091  142411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:09.924929  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:09.924958  142411 machine.go:97] duration metric: took 1.042352034s to provisionDockerMachine
	I0420 01:26:09.924973  142411 start.go:293] postStartSetup for "old-k8s-version-564860" (driver="kvm2")
	I0420 01:26:09.924985  142411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:09.925021  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:09.925441  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:09.925485  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.927985  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928377  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.928407  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928565  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.928770  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.928944  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.929114  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.020189  142411 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:10.025578  142411 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:10.025607  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:10.025707  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:10.025795  142411 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:10.025888  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:10.038138  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:10.065063  142411 start.go:296] duration metric: took 140.07164ms for postStartSetup
	I0420 01:26:10.065111  142411 fix.go:56] duration metric: took 24.94209431s for fixHost
	I0420 01:26:10.065139  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.068099  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068493  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.068544  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068697  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.068916  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069255  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.069455  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:10.069662  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:10.069678  142411 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:10.190955  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576370.174630368
	
	I0420 01:26:10.190984  142411 fix.go:216] guest clock: 1713576370.174630368
	I0420 01:26:10.190994  142411 fix.go:229] Guest: 2024-04-20 01:26:10.174630368 +0000 UTC Remote: 2024-04-20 01:26:10.065116719 +0000 UTC m=+276.709087933 (delta=109.513649ms)
	I0420 01:26:10.191036  142411 fix.go:200] guest clock delta is within tolerance: 109.513649ms
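
The guest-clock check above runs date +%s.%N on the VM, compares it with the host's wall clock, and only resyncs when the delta exceeds a tolerance; here the ~109ms delta passes. A small sketch of that comparison follows; the one-second tolerance is an assumption, not minikube's exact threshold.

    // clockdelta.go - sketch of the guest/host clock comparison; the tolerance value
    // is an assumption, not minikube's exact threshold.
    package main

    import (
        "fmt"
        "math"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // In minikube this command runs on the guest over SSH; locally it still
        // illustrates the same "seconds.nanoseconds" output being parsed.
        out, err := exec.Command("date", "+%s.%N").Output()
        if err != nil {
            panic(err)
        }
        guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            panic(err)
        }
        host := float64(time.Now().UnixNano()) / 1e9
        delta := time.Duration(math.Abs(host-guest) * float64(time.Second))
        const tolerance = time.Second // assumed tolerance
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
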
	I0420 01:26:10.191044  142411 start.go:83] releasing machines lock for "old-k8s-version-564860", held for 25.068071712s
	I0420 01:26:10.191074  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.191368  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:10.194872  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195333  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.195365  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195510  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196060  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196253  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196331  142411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:10.196375  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.196439  142411 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:10.196467  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.199156  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199522  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.199572  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199760  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.199975  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200098  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.200137  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.200165  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.200326  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.200700  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.200857  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200992  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.201150  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.283430  142411 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:10.310703  142411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:10.462457  142411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:10.470897  142411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:10.470993  142411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:10.489867  142411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:10.489899  142411 start.go:494] detecting cgroup driver to use...
	I0420 01:26:10.489996  142411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:10.512741  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:10.530013  142411 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:10.530077  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:10.548567  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:10.565645  142411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:10.693390  142411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:10.878889  142411 docker.go:233] disabling docker service ...
	I0420 01:26:10.878973  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:10.901233  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:10.915219  142411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:11.053815  142411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:11.201766  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:11.218569  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:11.240543  142411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0420 01:26:11.240604  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.253384  142411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:11.253460  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.268703  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.281575  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.296477  142411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:11.312458  142411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:11.328008  142411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:11.328076  142411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:11.349027  142411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:11.362064  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:11.500624  142411 ssh_runner.go:195] Run: sudo systemctl restart crio
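
The sed commands above rewrite the pause_image and cgroup_manager entries in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. The sketch below performs the equivalent rewrites on an in-memory config string; it is illustrative, with a made-up sample input rather than the real drop-in file.

    // crioconf.go - sketch of the same pause_image / cgroup_manager rewrites the sed
    // commands above perform on 02-crio.conf (illustrative; sample input is invented).
    package main

    import (
        "fmt"
        "regexp"
    )

    func rewrite(conf string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewrite(sample))
        // After the rewrite cri-o is restarted (systemctl restart crio) so the new
        // pause image and cgroup driver take effect.
    }
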
	I0420 01:26:11.665985  142411 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:11.666061  142411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:11.672929  142411 start.go:562] Will wait 60s for crictl version
	I0420 01:26:11.673006  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:11.678398  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:11.727572  142411 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:11.727663  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.760504  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.803463  142411 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0420 01:26:11.804782  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:11.807755  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808135  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:11.808177  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808396  142411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:11.813653  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:11.830618  142411 kubeadm.go:877] updating cluster {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:11.830793  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:26:11.830874  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:11.889149  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:11.889218  142411 ssh_runner.go:195] Run: which lz4
	I0420 01:26:11.894461  142411 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:26:11.900427  142411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:26:11.900456  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0420 01:26:10.183110  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:12.184209  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:11.636722  141746 main.go:141] libmachine: (no-preload-338118) Waiting to get IP...
	I0420 01:26:11.637635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.638048  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.638135  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.638011  143635 retry.go:31] will retry after 264.135122ms: waiting for machine to come up
	I0420 01:26:11.903486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.904008  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.904053  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.903958  143635 retry.go:31] will retry after 367.952741ms: waiting for machine to come up
	I0420 01:26:12.273951  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.274547  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.274584  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.274491  143635 retry.go:31] will retry after 390.958735ms: waiting for machine to come up
	I0420 01:26:12.667348  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.667888  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.667915  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.667820  143635 retry.go:31] will retry after 554.212994ms: waiting for machine to come up
	I0420 01:26:13.223423  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.224158  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.224184  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.224058  143635 retry.go:31] will retry after 686.102207ms: waiting for machine to come up
	I0420 01:26:13.911430  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.912019  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.912042  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.911968  143635 retry.go:31] will retry after 875.263983ms: waiting for machine to come up
	I0420 01:26:14.788949  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:14.789431  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:14.789481  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:14.789392  143635 retry.go:31] will retry after 847.129796ms: waiting for machine to come up
	I0420 01:26:15.637863  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:15.638348  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:15.638379  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:15.638288  143635 retry.go:31] will retry after 1.162423805s: waiting for machine to come up
	I0420 01:26:11.866297  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:13.868499  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:14.867208  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.867241  142057 pod_ready.go:81] duration metric: took 5.008488667s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.867254  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875100  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.875119  142057 pod_ready.go:81] duration metric: took 7.856647ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875131  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880630  142057 pod_ready.go:92] pod "kube-proxy-crzk6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.880651  142057 pod_ready.go:81] duration metric: took 5.512379ms for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880661  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885625  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.885645  142057 pod_ready.go:81] duration metric: took 4.976632ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885656  142057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
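The pod_ready lines above poll each control-plane pod until its Ready condition reports True and record how long the wait took. A minimal client-go sketch of that wait, assuming a reachable kubeconfig; the kubeconfig path, poll interval, and pod name are placeholders, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the API server until the pod is Ready or the timeout passes.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("pod %s/%s never became Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path is hypothetical
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForPodReady(cs, "kube-system", "kube-scheduler-embed-certs-269507", 4*time.Minute))
}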
	I0420 01:26:14.031960  142411 crio.go:462] duration metric: took 2.137532848s to copy over tarball
	I0420 01:26:14.032043  142411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:26:17.581625  142411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.549548059s)
	I0420 01:26:17.581660  142411 crio.go:469] duration metric: took 3.549666471s to extract the tarball
	I0420 01:26:17.581672  142411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:26:17.633172  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:17.679514  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:17.679544  142411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:17.679710  142411 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.679940  142411 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.680051  142411 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.680061  142411 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.680225  142411 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.680266  142411 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0420 01:26:17.680442  142411 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.680516  142411 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.682336  142411 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.682425  142411 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0420 01:26:17.682428  142411 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.682462  142411 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.682341  142411 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.682512  142411 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.682952  142411 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.682955  142411 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.846602  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.850673  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.866812  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.871983  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.876346  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0420 01:26:17.876745  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.881269  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.985788  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.997662  142411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0420 01:26:17.997709  142411 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0420 01:26:17.997716  142411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.997751  142411 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.997778  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:17.997797  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071610  142411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0420 01:26:18.071682  142411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.071705  142411 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0420 01:26:18.071741  142411 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.071760  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071793  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.085631  142411 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0420 01:26:18.085689  142411 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0420 01:26:18.085748  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.087239  142411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0420 01:26:18.087288  142411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.087362  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.094891  142411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0420 01:26:18.094940  142411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:18.094989  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.232524  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.232613  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0420 01:26:18.232649  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.232682  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.232710  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
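Because the v1.20.0 images are not preloaded, each one is checked in the container runtime and, when not present at the expected digest, removed so the cached copy can be transferred instead. A rough Go sketch of that check-and-remove step, using the same podman/crictl commands the log shows; the digest comparison is omitted, so treat it as shape only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks the runtime (via podman, as in the log) for the image ID;
// an error or empty output means the image must be loaded from the local cache.
func imageID(ref string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	return strings.TrimSpace(string(out)), err
}

// ensureRemoved drops any stale copy so the cached image can be loaded cleanly.
func ensureRemoved(ref string) error {
	return exec.Command("sudo", "/usr/bin/crictl", "rmi", ref).Run()
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/coredns:1.7.0",
	}
	for _, ref := range images {
		if id, err := imageID(ref); err != nil || id == "" {
			fmt.Printf("%q needs transfer, removing any stale copy\n", ref)
			_ = ensureRemoved(ref) // best effort, as in the log
		}
	}
}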
	I0420 01:26:14.684499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:17.185481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:16.802494  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:16.802977  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:16.803009  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:16.802908  143635 retry.go:31] will retry after 1.370900633s: waiting for machine to come up
	I0420 01:26:18.175474  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:18.175996  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:18.176022  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:18.175943  143635 retry.go:31] will retry after 1.698879408s: waiting for machine to come up
	I0420 01:26:19.876437  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:19.876901  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:19.876932  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:19.876843  143635 retry.go:31] will retry after 2.622833508s: waiting for machine to come up
	I0420 01:26:16.894119  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.894941  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.408724  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0420 01:26:18.408791  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0420 01:26:18.410041  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0420 01:26:18.410136  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0420 01:26:18.424042  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0420 01:26:18.428203  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0420 01:26:18.428295  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0420 01:26:18.450170  142411 cache_images.go:92] duration metric: took 770.600266ms to LoadCachedImages
	W0420 01:26:18.450288  142411 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0420 01:26:18.450305  142411 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.20.0 crio true true} ...
	I0420 01:26:18.450428  142411 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-564860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:18.450522  142411 ssh_runner.go:195] Run: crio config
	I0420 01:26:18.503362  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:26:18.503407  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:18.503427  142411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:18.503463  142411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-564860 NodeName:old-k8s-version-564860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0420 01:26:18.503671  142411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-564860"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:18.503745  142411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0420 01:26:18.516393  142411 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:18.516475  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:18.529038  142411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0420 01:26:18.550442  142411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:18.572012  142411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0420 01:26:18.595682  142411 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:18.602036  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:18.622226  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:18.774466  142411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:18.795074  142411 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860 for IP: 192.168.61.91
	I0420 01:26:18.795104  142411 certs.go:194] generating shared ca certs ...
	I0420 01:26:18.795125  142411 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:18.795301  142411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:18.795342  142411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:18.795352  142411 certs.go:256] generating profile certs ...
	I0420 01:26:18.795433  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key
	I0420 01:26:18.795487  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f
	I0420 01:26:18.795524  142411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key
	I0420 01:26:18.795645  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:18.795675  142411 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:18.795685  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:18.795706  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:18.795735  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:18.795765  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:18.795828  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:18.796607  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:18.845581  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:18.891065  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:18.933536  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:18.977381  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:26:19.009816  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:19.042053  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:19.090614  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:26:19.119554  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:19.147545  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:19.177775  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:19.211008  142411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:19.234399  142411 ssh_runner.go:195] Run: openssl version
	I0420 01:26:19.242808  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:19.256132  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261681  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261739  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.270546  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:19.284112  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:19.296998  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302497  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302551  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.310883  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:19.325130  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:19.338964  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344915  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344986  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.351926  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
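The openssl/ln commands above install each CA into the system trust store: copy the PEM, compute its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients can resolve it by hash. A minimal Go sketch of that step, not minikube's certs.go; paths are taken from the log, it needs root, and error handling is terse:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA computes the certificate's subject hash and creates the
// <hash>.0 symlink that OpenSSL uses to locate trusted CAs.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))           // e.g. b5213941 for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash) // OpenSSL looks certs up by this name
	_ = os.Remove(link)                              // mimic "ln -fs": replace any old link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}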
	I0420 01:26:19.366428  142411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:19.372391  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:19.379606  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:19.386698  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:19.395102  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:19.401981  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:19.409477  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
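The "-checkend 86400" runs above verify that the existing control-plane certificates will not expire within the next 24 hours before the restart path reuses them. A small Go equivalent using crypto/x509; the certificate paths are copied from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		fmt.Printf("%s expiresSoon=%v err=%v\n", c, soon, err)
	}
}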
	I0420 01:26:19.416444  142411 kubeadm.go:391] StartCluster: {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:19.416557  142411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:19.416600  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.460782  142411 cri.go:89] found id: ""
	I0420 01:26:19.460884  142411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:19.473812  142411 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:19.473832  142411 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:19.473838  142411 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:19.473899  142411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:19.486686  142411 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:19.487757  142411 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:26:19.488411  142411 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-76456/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-564860" cluster setting kubeconfig missing "old-k8s-version-564860" context setting]
	I0420 01:26:19.489438  142411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:19.491237  142411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:19.503483  142411 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0420 01:26:19.503519  142411 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:19.503530  142411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:19.503597  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.546350  142411 cri.go:89] found id: ""
	I0420 01:26:19.546438  142411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:19.568177  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:19.580545  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:19.580573  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:19.580658  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:19.592945  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:19.593010  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:19.605598  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:19.617261  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:19.617346  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:19.629242  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.640143  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:19.640211  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.654226  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:19.666207  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:19.666275  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
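The grep/rm sequence above keeps a kubeconfig under /etc/kubernetes only if it already points at the expected control-plane endpoint; anything stale or missing is removed so kubeadm regenerates it. A compact Go sketch of that cleanup, using the same file list and endpoint as in the log:

package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Unreadable or stale configs are removed, mirroring "sudo rm -f".
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%s missing or stale, removing\n", f)
			_ = os.Remove(f)
		}
	}
}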
	I0420 01:26:19.678899  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:19.694374  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:19.845435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.619142  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.891265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.020834  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.124545  142411 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:21.124652  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:21.625462  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.125171  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:23.125077  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
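The repeated pgrep runs above poll roughly twice per second for a kube-apiserver process launched from the minikube binaries. A minimal Go sketch of that wait loop; the two-minute timeout is an assumption, while the pgrep pattern is the one shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPID runs the same pgrep the log shows and reports whether a
// matching kube-apiserver process exists yet.
func apiserverPID() (string, bool) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	pid := strings.TrimSpace(string(out))
	return pid, err == nil && pid != ""
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
	for time.Now().Before(deadline) {
		if pid, ok := apiserverPID(); ok {
			fmt.Println("apiserver process appeared, pid", pid)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}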
	I0420 01:26:19.685129  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.183561  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.502227  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:22.502665  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:22.502696  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:22.502603  143635 retry.go:31] will retry after 3.3877716s: waiting for machine to come up
	I0420 01:26:21.392042  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.392579  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.394230  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.625392  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.125446  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.625035  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.624718  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.625420  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.125162  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.625475  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:28.125637  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.685014  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:27.182545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.891769  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:25.892321  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:25.892353  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:25.892252  143635 retry.go:31] will retry after 3.395760477s: waiting for machine to come up
	I0420 01:26:29.290361  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:29.290858  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:29.290907  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:29.290791  143635 retry.go:31] will retry after 4.86761736s: waiting for machine to come up
	I0420 01:26:27.892903  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:30.392680  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:28.625781  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.125145  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.625647  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.125081  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.625404  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.124753  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.124750  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.624841  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:33.125120  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.682707  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:31.682790  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:33.683549  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.162306  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.162883  141746 main.go:141] libmachine: (no-preload-338118) Found IP for machine: 192.168.72.89
	I0420 01:26:34.162912  141746 main.go:141] libmachine: (no-preload-338118) Reserving static IP address...
	I0420 01:26:34.162928  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has current primary IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.163266  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.163296  141746 main.go:141] libmachine: (no-preload-338118) Reserved static IP address: 192.168.72.89
	I0420 01:26:34.163316  141746 main.go:141] libmachine: (no-preload-338118) DBG | skip adding static IP to network mk-no-preload-338118 - found existing host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"}
	I0420 01:26:34.163335  141746 main.go:141] libmachine: (no-preload-338118) DBG | Getting to WaitForSSH function...
	I0420 01:26:34.163350  141746 main.go:141] libmachine: (no-preload-338118) Waiting for SSH to be available...
	I0420 01:26:34.165641  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.165947  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.165967  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.166136  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH client type: external
	I0420 01:26:34.166161  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa (-rw-------)
	I0420 01:26:34.166190  141746 main.go:141] libmachine: (no-preload-338118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:34.166216  141746 main.go:141] libmachine: (no-preload-338118) DBG | About to run SSH command:
	I0420 01:26:34.166232  141746 main.go:141] libmachine: (no-preload-338118) DBG | exit 0
	I0420 01:26:34.293435  141746 main.go:141] libmachine: (no-preload-338118) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:34.293789  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetConfigRaw
	I0420 01:26:34.294381  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.296958  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297355  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.297391  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297670  141746 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/config.json ...
	I0420 01:26:34.297915  141746 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:34.297945  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:34.298191  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.300645  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.301068  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301280  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.301496  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301719  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301895  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.302104  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.302272  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.302284  141746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:34.419082  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:34.419113  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419424  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:26:34.419452  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419715  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.422630  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423010  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.423052  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423212  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.423415  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423599  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423716  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.423928  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.424135  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.424149  141746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-338118 && echo "no-preload-338118" | sudo tee /etc/hostname
	I0420 01:26:34.555223  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-338118
	
	I0420 01:26:34.555254  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.558217  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558606  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.558643  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558792  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.558999  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559241  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559423  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.559655  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.559827  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.559844  141746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-338118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-338118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-338118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:34.684192  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:34.684226  141746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:34.684261  141746 buildroot.go:174] setting up certificates
	I0420 01:26:34.684270  141746 provision.go:84] configureAuth start
	I0420 01:26:34.684289  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.684581  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.687363  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687703  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.687733  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687876  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.690220  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690542  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.690569  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690739  141746 provision.go:143] copyHostCerts
	I0420 01:26:34.690806  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:34.690817  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:34.690869  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:34.691006  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:34.691017  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:34.691038  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:34.691103  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:34.691111  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:34.691130  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:34.691178  141746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.no-preload-338118 san=[127.0.0.1 192.168.72.89 localhost minikube no-preload-338118]
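provision.go generates a server certificate whose SANs cover every name and IP the machine will be reached by (127.0.0.1, 192.168.72.89, localhost, minikube, no-preload-338118), signed by the machine CA. The sketch below shows the same SAN handling with crypto/x509 but self-signs for brevity, so it is shape only, not minikube's certificate code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-338118"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-338118"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.89")},
	}
	// Self-signed here; minikube signs with ca.pem/ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Println("wrote server.pem with SANs for the machine's names and IPs")
}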
	I0420 01:26:34.899595  141746 provision.go:177] copyRemoteCerts
	I0420 01:26:34.899652  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:34.899676  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.902298  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902745  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.902777  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902956  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.903150  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.903309  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.903457  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:34.993263  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:35.024837  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0420 01:26:35.054254  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:26:35.082455  141746 provision.go:87] duration metric: took 398.171071ms to configureAuth
	I0420 01:26:35.082488  141746 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:35.082741  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:26:35.082822  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.085868  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086264  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.086313  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086481  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.086708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.086868  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.087051  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.087254  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.087424  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.087440  141746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:35.374277  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:35.374305  141746 machine.go:97] duration metric: took 1.076369907s to provisionDockerMachine
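	The %!s(MISSING) in the SSH command a few lines above is a Go format-verb artifact in the log, not part of the command itself; reconstructed from the output that follows it, what effectively runs on the guest is:

	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio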
	I0420 01:26:35.374327  141746 start.go:293] postStartSetup for "no-preload-338118" (driver="kvm2")
	I0420 01:26:35.374342  141746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:35.374366  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.374733  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:35.374787  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.378647  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.378998  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.379038  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.379149  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.379353  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.379518  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.379694  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.468711  141746 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:35.473783  141746 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:35.473808  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:35.473929  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:35.474088  141746 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:35.474217  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:35.484161  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:35.511695  141746 start.go:296] duration metric: took 137.354669ms for postStartSetup
	I0420 01:26:35.511751  141746 fix.go:56] duration metric: took 25.320502022s for fixHost
	I0420 01:26:35.511780  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.514635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.515067  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515247  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.515448  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515663  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515814  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.515988  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.516218  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.516240  141746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:35.632029  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576395.615634246
	
	I0420 01:26:35.632057  141746 fix.go:216] guest clock: 1713576395.615634246
	I0420 01:26:35.632067  141746 fix.go:229] Guest: 2024-04-20 01:26:35.615634246 +0000 UTC Remote: 2024-04-20 01:26:35.511757232 +0000 UTC m=+369.861721674 (delta=103.877014ms)
	I0420 01:26:35.632113  141746 fix.go:200] guest clock delta is within tolerance: 103.877014ms
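	The date command above is logged with %!s(MISSING)/%!N(MISSING) for the same format-verb reason; the guest actually runs date +%s.%N, and the delta on the fix.go:229 line is simply the difference between the two timestamps it prints:

	    date +%s.%N   # guest prints 1713576395.615634246 here
	    # 1713576395.615634246 - 1713576395.511757232 = 0.103877014 s, i.e. the reported delta=103.877014ms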
	I0420 01:26:35.632137  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 25.440933699s
	I0420 01:26:35.632168  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.632486  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:35.635888  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636400  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.636440  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636751  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637250  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637448  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637547  141746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:35.637597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.637694  141746 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:35.637720  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.640562  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640800  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640953  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.640969  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641244  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641389  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.641433  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641644  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.641670  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641806  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.641873  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641997  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.642163  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:32.892859  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.893134  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:35.749528  141746 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:35.756960  141746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:35.912075  141746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:35.920264  141746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:35.920355  141746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:35.937729  141746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:35.937753  141746 start.go:494] detecting cgroup driver to use...
	I0420 01:26:35.937811  141746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:35.954425  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:35.970967  141746 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:35.971023  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:35.986186  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:36.000803  141746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:36.114673  141746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:36.273386  141746 docker.go:233] disabling docker service ...
	I0420 01:26:36.273472  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:36.290471  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:36.305722  141746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:36.459528  141746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:36.609105  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:36.627255  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:36.651459  141746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:26:36.651535  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.663171  141746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:36.663255  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.674706  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.686196  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.697909  141746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:36.709625  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.720746  141746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.740333  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
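	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values (a sketch showing only the touched keys; the rest of the file is left as shipped in the guest image):

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # ]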
	I0420 01:26:36.752898  141746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:36.764600  141746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:36.764653  141746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:36.780697  141746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:36.791440  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:36.936761  141746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:37.095374  141746 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:37.095475  141746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:37.101601  141746 start.go:562] Will wait 60s for crictl version
	I0420 01:26:37.101673  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.106191  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:37.152257  141746 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:37.152361  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.187172  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.225203  141746 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:26:33.625596  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.124972  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.624791  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.125630  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.624815  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.125677  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.625631  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.624883  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:38.124924  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.183893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.184381  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:37.226708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:37.229679  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230090  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:37.230131  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230253  141746 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:37.234914  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:37.249029  141746 kubeadm.go:877] updating cluster {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:37.249155  141746 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:26:37.249208  141746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:37.287235  141746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:26:37.287270  141746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:37.287341  141746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.287379  141746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.287387  141746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.287363  141746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.287414  141746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.287378  141746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.287399  141746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.287365  141746 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288833  141746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.288849  141746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.288863  141746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.288922  141746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.288933  141746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.288831  141746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.288957  141746 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288985  141746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.452705  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.462178  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.463495  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0420 01:26:37.469562  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.480726  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.501069  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.517291  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.533934  141746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0420 01:26:37.533976  141746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.534032  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.578341  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.602332  141746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0420 01:26:37.602381  141746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.602432  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.718979  141746 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0420 01:26:37.719028  141746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0420 01:26:37.719065  141746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0420 01:26:37.719093  141746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.719100  141746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0420 01:26:37.719126  141746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.719153  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719220  141746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0420 01:26:37.719256  141746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.719067  141746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.719155  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719306  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.719309  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719036  141746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.719369  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719154  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.719297  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.733974  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.802462  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.802496  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.802544  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0420 01:26:37.802575  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0420 01:26:37.802637  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.802708  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.802725  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0420 01:26:37.802788  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:37.897150  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0420 01:26:37.897190  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0420 01:26:37.897259  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:37.897268  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0420 01:26:37.897278  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:37.897285  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.897295  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0420 01:26:37.897337  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.902046  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0420 01:26:37.902094  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0420 01:26:37.902151  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:37.902307  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0420 01:26:37.902399  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:37.914016  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0420 01:26:40.184815  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.287511777s)
	I0420 01:26:40.184859  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0420 01:26:40.184918  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.282742718s)
	I0420 01:26:40.184951  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.282534359s)
	I0420 01:26:40.184974  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0420 01:26:40.184981  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0420 01:26:40.185052  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.287690505s)
	I0420 01:26:40.185081  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0420 01:26:40.185113  141746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:40.185175  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.392757  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:39.394094  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.624766  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.125330  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.624953  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.125409  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.625125  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.125460  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.625041  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.125103  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:43.125237  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.186531  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.683524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.252666  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067465398s)
	I0420 01:26:42.252710  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0420 01:26:42.252735  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:42.252774  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:44.616564  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.363755421s)
	I0420 01:26:44.616614  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0420 01:26:44.616649  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:44.616713  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:41.394300  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.895493  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.625155  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.124986  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.125834  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.625359  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.125706  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.625115  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.125204  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.625746  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:48.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.183628  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:47.684002  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:46.894590  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.277850916s)
	I0420 01:26:46.894626  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0420 01:26:46.894655  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:46.894712  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:49.158327  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.263583483s)
	I0420 01:26:49.158370  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0420 01:26:49.158406  141746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:49.158478  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:50.223297  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.06478687s)
	I0420 01:26:50.223344  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0420 01:26:50.223382  141746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:50.223452  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:46.393020  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.394414  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:50.893840  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.125441  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.625078  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.124787  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.624817  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.125211  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.625408  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.124903  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.624826  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:53.124728  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.183173  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:52.183563  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:54.187354  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.963876859s)
	I0420 01:26:54.187388  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0420 01:26:54.187416  141746 cache_images.go:123] Successfully loaded all cached images
	I0420 01:26:54.187426  141746 cache_images.go:92] duration metric: took 16.900140079s to LoadCachedImages
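	With every cached image transferred and loaded, the result can be confirmed from the guest with either runtime's image listing (the same kind of check minikube ran earlier via sudo crictl images --output json):

	    sudo crictl images
	    sudo podman images --format '{{.Repository}}:{{.Tag}}'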
	I0420 01:26:54.187439  141746 kubeadm.go:928] updating node { 192.168.72.89 8443 v1.30.0 crio true true} ...
	I0420 01:26:54.187545  141746 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-338118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:54.187608  141746 ssh_runner.go:195] Run: crio config
	I0420 01:26:54.245888  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:26:54.245914  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:54.245928  141746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:54.245954  141746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.89 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-338118 NodeName:no-preload-338118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:26:54.246153  141746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-338118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:54.246232  141746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:26:54.259262  141746 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:54.259360  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:54.270769  141746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0420 01:26:54.290436  141746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:54.311846  141746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
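	The rendered kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new; if a manual sanity check were wanted before init runs, and assuming the kubeadm v1.30 binary on the guest supports the config validate subcommand, it would look like:

	    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new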
	I0420 01:26:54.332517  141746 ssh_runner.go:195] Run: grep 192.168.72.89	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:54.336874  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:54.350084  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:54.466328  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:54.484511  141746 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118 for IP: 192.168.72.89
	I0420 01:26:54.484545  141746 certs.go:194] generating shared ca certs ...
	I0420 01:26:54.484609  141746 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:54.484846  141746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:54.484960  141746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:54.484996  141746 certs.go:256] generating profile certs ...
	I0420 01:26:54.485165  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/client.key
	I0420 01:26:54.485273  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key.f8d917a4
	I0420 01:26:54.485353  141746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key
	I0420 01:26:54.485543  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:54.485604  141746 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:54.485622  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:54.485667  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:54.485707  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:54.485741  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:54.485804  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:54.486486  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:54.539867  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:54.575443  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:54.609857  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:54.638338  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 01:26:54.672043  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:54.704197  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:54.733771  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:26:54.761911  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:54.789278  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:54.816890  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:54.845884  141746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:54.864508  141746 ssh_runner.go:195] Run: openssl version
	I0420 01:26:54.870717  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:54.883192  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888532  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888588  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.895258  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:54.907346  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:54.919360  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924700  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924773  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.931133  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:54.942845  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:54.954785  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959769  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959856  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.966061  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:54.978389  141746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:54.983591  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:54.990157  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:54.996977  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:55.004103  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:55.010928  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:55.018024  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 01:26:55.024639  141746 kubeadm.go:391] StartCluster: {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:55.024733  141746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:55.024784  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.073888  141746 cri.go:89] found id: ""
	I0420 01:26:55.073954  141746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:55.087179  141746 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:55.087199  141746 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:55.087208  141746 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:55.087255  141746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:55.098975  141746 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:55.100487  141746 kubeconfig.go:125] found "no-preload-338118" server: "https://192.168.72.89:8443"
	I0420 01:26:55.103557  141746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:55.114871  141746 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.89
	I0420 01:26:55.114900  141746 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:55.114914  141746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:55.114983  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.174863  141746 cri.go:89] found id: ""
	I0420 01:26:55.174969  141746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:55.192867  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:55.203842  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:55.203866  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:55.203919  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:55.214476  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:55.214534  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:55.224728  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:55.235353  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:55.235403  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:55.245905  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.256614  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:55.256678  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.266909  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:55.276249  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:55.276294  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:55.285758  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:55.295896  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:55.418331  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:53.394623  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:55.893492  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:53.625614  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.125487  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.625414  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.125150  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.624831  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.125438  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.625450  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.125591  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.625757  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:58.124963  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.186686  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.681991  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.682958  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.156484  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.376987  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.450655  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.517915  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:56.518018  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.018277  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.518215  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.538017  141746 api_server.go:72] duration metric: took 1.020104679s to wait for apiserver process to appear ...
	I0420 01:26:57.538045  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:26:57.538070  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:26:58.392944  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:00.892688  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.625549  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.125177  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.624704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.125709  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.625346  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.124849  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.624947  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.125407  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.625704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:03.125695  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.182564  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.183451  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:02.538442  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:02.538498  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:03.396891  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:05.896375  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.625423  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.124806  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.625232  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.124917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.624983  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.124851  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.625029  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.125554  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.625163  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:08.125455  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.682216  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.683636  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.538926  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:07.538973  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:08.392765  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:10.392933  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:08.625100  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.125395  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.625454  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.125615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.624892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.125366  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.625074  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.125165  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.625629  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:13.124824  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.182884  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.683893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.540046  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:12.540121  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:12.393561  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:14.893756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:13.625040  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.125511  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.624890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.125622  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.625393  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.125215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.625561  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.125263  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.624772  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:18.125597  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.183734  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.683742  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.540652  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:17.540701  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.076616  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": read tcp 192.168.72.1:34174->192.168.72.89:8443: read: connection reset by peer
	I0420 01:27:18.076671  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.077186  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:18.538798  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.539454  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:19.039080  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:17.393196  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:19.395273  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:18.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.124956  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.625579  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.124827  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.625212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:21.125476  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:21.125553  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:21.174633  142411 cri.go:89] found id: ""
	I0420 01:27:21.174668  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.174679  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:21.174686  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:21.174767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:21.218230  142411 cri.go:89] found id: ""
	I0420 01:27:21.218263  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.218275  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:21.218284  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:21.218369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:21.258886  142411 cri.go:89] found id: ""
	I0420 01:27:21.258916  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.258926  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:21.258932  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:21.259003  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:21.306725  142411 cri.go:89] found id: ""
	I0420 01:27:21.306758  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.306769  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:21.306777  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:21.306843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:21.349049  142411 cri.go:89] found id: ""
	I0420 01:27:21.349086  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.349098  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:21.349106  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:21.349174  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:21.392312  142411 cri.go:89] found id: ""
	I0420 01:27:21.392338  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.392346  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:21.392352  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:21.392425  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:21.434121  142411 cri.go:89] found id: ""
	I0420 01:27:21.434148  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.434156  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:21.434162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:21.434210  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:21.473728  142411 cri.go:89] found id: ""
	I0420 01:27:21.473754  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.473762  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:21.473772  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:21.473785  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:21.537607  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:21.537648  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:21.554563  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:21.554604  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:21.674778  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:21.674803  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:21.674829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:21.740625  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:21.740666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:20.182461  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:22.682574  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.039641  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:24.039690  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:21.397381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:23.893642  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.284890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:24.301486  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:24.301571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:24.340987  142411 cri.go:89] found id: ""
	I0420 01:27:24.341012  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.341021  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:24.341026  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:24.341102  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:24.379983  142411 cri.go:89] found id: ""
	I0420 01:27:24.380014  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.380024  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:24.380029  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:24.380113  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:24.438700  142411 cri.go:89] found id: ""
	I0420 01:27:24.438729  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.438739  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:24.438745  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:24.438795  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:24.487761  142411 cri.go:89] found id: ""
	I0420 01:27:24.487793  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.487802  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:24.487808  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:24.487870  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:24.529408  142411 cri.go:89] found id: ""
	I0420 01:27:24.529439  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.529448  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:24.529453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:24.529523  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:24.572782  142411 cri.go:89] found id: ""
	I0420 01:27:24.572817  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.572831  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:24.572841  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:24.572910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:24.620651  142411 cri.go:89] found id: ""
	I0420 01:27:24.620684  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.620696  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:24.620704  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:24.620769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:24.659481  142411 cri.go:89] found id: ""
	I0420 01:27:24.659513  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.659525  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:24.659537  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:24.659552  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.714483  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:24.714517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:24.730279  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:24.730316  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:24.804883  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:24.804909  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:24.804926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:24.879557  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:24.879602  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:27.431026  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:27.448112  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:27.448176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:27.494959  142411 cri.go:89] found id: ""
	I0420 01:27:27.494988  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.494999  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:27.495007  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:27.495075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:27.532023  142411 cri.go:89] found id: ""
	I0420 01:27:27.532055  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.532066  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:27.532075  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:27.532151  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:27.578551  142411 cri.go:89] found id: ""
	I0420 01:27:27.578600  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.578613  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:27.578621  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:27.578692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:27.618248  142411 cri.go:89] found id: ""
	I0420 01:27:27.618277  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.618288  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:27.618296  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:27.618363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:27.655682  142411 cri.go:89] found id: ""
	I0420 01:27:27.655714  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.655723  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:27.655729  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:27.655787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:27.696355  142411 cri.go:89] found id: ""
	I0420 01:27:27.696389  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.696400  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:27.696408  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:27.696478  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:27.735354  142411 cri.go:89] found id: ""
	I0420 01:27:27.735378  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.735396  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:27.735402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:27.735460  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:27.775234  142411 cri.go:89] found id: ""
	I0420 01:27:27.775261  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.775269  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:27.775277  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:27.775294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:27.789970  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:27.790005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:27.873345  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:27.873371  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:27.873387  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:27.952309  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:27.952353  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:28.003746  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:28.003792  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.683122  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:27.182311  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:29.040691  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:29.040743  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:26.394161  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:28.893349  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.893785  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.555691  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:30.570962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:30.571041  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:30.613185  142411 cri.go:89] found id: ""
	I0420 01:27:30.613218  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.613227  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:30.613233  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:30.613291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:30.654494  142411 cri.go:89] found id: ""
	I0420 01:27:30.654520  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.654529  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:30.654535  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:30.654600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:30.702605  142411 cri.go:89] found id: ""
	I0420 01:27:30.702634  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.702646  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:30.702653  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:30.702719  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:30.742072  142411 cri.go:89] found id: ""
	I0420 01:27:30.742104  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.742115  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:30.742123  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:30.742191  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:30.793199  142411 cri.go:89] found id: ""
	I0420 01:27:30.793232  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.793244  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:30.793252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:30.793340  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:30.832978  142411 cri.go:89] found id: ""
	I0420 01:27:30.833019  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.833034  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:30.833044  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:30.833126  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:30.875606  142411 cri.go:89] found id: ""
	I0420 01:27:30.875641  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.875655  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:30.875662  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:30.875729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:30.917288  142411 cri.go:89] found id: ""
	I0420 01:27:30.917335  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.917348  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:30.917360  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:30.917375  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:30.996446  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:30.996469  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:30.996485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:31.080494  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:31.080543  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:31.141226  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:31.141260  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:31.212808  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:31.212845  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:29.182651  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:31.183179  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.682476  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:34.041737  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:34.041789  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:33.393756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:35.395120  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.728927  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:33.745749  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:33.745835  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:33.788813  142411 cri.go:89] found id: ""
	I0420 01:27:33.788845  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.788859  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:33.788868  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:33.788936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:33.834918  142411 cri.go:89] found id: ""
	I0420 01:27:33.834948  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.834957  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:33.834963  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:33.835026  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:33.873928  142411 cri.go:89] found id: ""
	I0420 01:27:33.873960  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.873972  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:33.873977  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:33.874027  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:33.921462  142411 cri.go:89] found id: ""
	I0420 01:27:33.921497  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.921510  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:33.921519  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:33.921606  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:33.962280  142411 cri.go:89] found id: ""
	I0420 01:27:33.962308  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.962320  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:33.962329  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:33.962390  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:34.002582  142411 cri.go:89] found id: ""
	I0420 01:27:34.002616  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.002627  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:34.002635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:34.002707  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:34.047383  142411 cri.go:89] found id: ""
	I0420 01:27:34.047410  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.047421  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:34.047428  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:34.047489  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:34.088296  142411 cri.go:89] found id: ""
	I0420 01:27:34.088341  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.088352  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:34.088364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:34.088381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:34.180338  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:34.180380  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:34.224386  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:34.224422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:34.278451  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:34.278488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:34.294377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:34.294409  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:34.377115  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:36.878000  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:36.896875  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:36.896953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:36.953915  142411 cri.go:89] found id: ""
	I0420 01:27:36.953954  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.953968  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:36.953977  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:36.954056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:36.998223  142411 cri.go:89] found id: ""
	I0420 01:27:36.998250  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.998260  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:36.998268  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:36.998337  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:37.069299  142411 cri.go:89] found id: ""
	I0420 01:27:37.069346  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.069358  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:37.069366  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:37.069436  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:37.112068  142411 cri.go:89] found id: ""
	I0420 01:27:37.112100  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.112112  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:37.112119  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:37.112175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:37.155883  142411 cri.go:89] found id: ""
	I0420 01:27:37.155913  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.155924  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:37.155933  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:37.156006  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:37.200979  142411 cri.go:89] found id: ""
	I0420 01:27:37.201007  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.201018  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:37.201026  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:37.201091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:37.241639  142411 cri.go:89] found id: ""
	I0420 01:27:37.241667  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.241678  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:37.241686  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:37.241748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:37.281845  142411 cri.go:89] found id: ""
	I0420 01:27:37.281883  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.281894  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:37.281907  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:37.281923  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:37.327428  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:37.327463  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:37.385213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:37.385248  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:37.400158  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:37.400190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:37.476662  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:37.476687  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:37.476700  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:37.090819  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:27:37.090858  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:27:37.090877  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.124020  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.124076  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:37.538389  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.550894  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.550930  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.038486  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.051983  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:38.052019  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.538297  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.544961  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:27:38.553038  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:27:38.553065  141746 api_server.go:131] duration metric: took 41.015012791s to wait for apiserver health ...
	I0420 01:27:38.553075  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:27:38.553081  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:27:38.554687  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
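	Editor's note: the lines above show api_server.go repeatedly probing https://192.168.72.89:8443/healthz, logging the failed post-start hooks, and moving on to CNI setup once the endpoint finally returns 200. The sketch below is an editorial illustration of that kind of polling loop, not part of the captured log and not minikube's actual implementation; the URL, timeout, and function names are assumptions.

```go
// Editorial sketch (assumed names): poll an apiserver /healthz endpoint
// until it returns HTTP 200 or a deadline expires, roughly mirroring the
// checks logged by api_server.go above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The bootstrapping apiserver serves a self-signed certificate,
		// so verification is skipped in this illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.89:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```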
	I0420 01:27:35.684396  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.183391  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.555934  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:27:38.575384  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
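	Editor's note: the scp line above writes a 496-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist. The file's exact contents are not captured in the log; the sketch below writes an approximate bridge-plus-portmap conflist of the general shape such a file takes. Only the destination path comes from the log; the JSON body, subnet, and permissions are assumptions.

```go
// Editorial sketch: write an illustrative bridge CNI conflist.
// The conflist JSON here is assumed, not the file minikube actually installs.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// 0644 is a typical permission mode for CNI config files (assumption).
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```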
	I0420 01:27:38.609934  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:27:38.637152  141746 system_pods.go:59] 8 kube-system pods found
	I0420 01:27:38.637184  141746 system_pods.go:61] "coredns-7db6d8ff4d-r2hs7" [981840a2-82cd-49e0-8d4f-fbaf05290668] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:27:38.637191  141746 system_pods.go:61] "etcd-no-preload-338118" [92fc0da4-63d3-4f34-a5a6-27b73e7e210d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:27:38.637198  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [9f7bd5df-f733-4944-9ad2-0c9f0ea4529b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:27:38.637206  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [d7a0bd6a-2cd0-4b27-ae83-ae38c1a20c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:27:38.637215  141746 system_pods.go:61] "kube-proxy-zgq86" [d379ae65-c579-47e4-b055-6512e74868a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0420 01:27:38.637219  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [99558213-289d-4682-ba8e-20175c815563] Running
	I0420 01:27:38.637225  141746 system_pods.go:61] "metrics-server-569cc877fc-lcbcz" [1d2b716a-555a-46aa-ae27-c40553c94288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:27:38.637229  141746 system_pods.go:61] "storage-provisioner" [a8316010-8689-42aa-9741-227bf55a16bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:27:38.637236  141746 system_pods.go:74] duration metric: took 27.280844ms to wait for pod list to return data ...
	I0420 01:27:38.637243  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:27:38.640744  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:27:38.640774  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:27:38.640791  141746 node_conditions.go:105] duration metric: took 3.542872ms to run NodePressure ...
	I0420 01:27:38.640813  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:27:38.979785  141746 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987541  141746 kubeadm.go:733] kubelet initialised
	I0420 01:27:38.987570  141746 kubeadm.go:734] duration metric: took 7.752383ms waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987582  141746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:27:38.994929  141746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:38.999872  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999903  141746 pod_ready.go:81] duration metric: took 4.940439ms for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:38.999915  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999923  141746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.004575  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004595  141746 pod_ready.go:81] duration metric: took 4.662163ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.004603  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004608  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.012365  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012386  141746 pod_ready.go:81] duration metric: took 7.773001ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.012393  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012400  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.019091  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019125  141746 pod_ready.go:81] duration metric: took 6.70398ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.019137  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019146  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
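	Editor's note: in the lines above, pod_ready.go waits for each system-critical pod to report the Ready condition and skips pods hosted on a node that is itself not Ready. Below is a small client-go sketch of inspecting a pod's Ready condition; it is an illustration only, not the helper minikube uses. The kubeconfig path and pod identity are copied from the log purely as placeholders.

```go
// Editorial sketch (placeholder identities): check whether a pod reports
// the Ready condition as True, roughly what pod_ready.go waits for above.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-zgq86", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
}
```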
	I0420 01:27:37.894228  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:39.899004  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:40.075888  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:40.091313  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:40.091389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:40.134013  142411 cri.go:89] found id: ""
	I0420 01:27:40.134039  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.134048  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:40.134053  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:40.134136  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:40.182108  142411 cri.go:89] found id: ""
	I0420 01:27:40.182140  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.182151  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:40.182158  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:40.182222  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:40.225406  142411 cri.go:89] found id: ""
	I0420 01:27:40.225438  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.225447  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:40.225453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:40.225539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.267599  142411 cri.go:89] found id: ""
	I0420 01:27:40.267627  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.267636  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:40.267645  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:40.267790  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:40.309385  142411 cri.go:89] found id: ""
	I0420 01:27:40.309418  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.309439  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:40.309448  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:40.309525  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:40.351947  142411 cri.go:89] found id: ""
	I0420 01:27:40.351980  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.351993  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:40.352003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:40.352079  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:40.395583  142411 cri.go:89] found id: ""
	I0420 01:27:40.395614  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.395623  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:40.395629  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:40.395692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:40.441348  142411 cri.go:89] found id: ""
	I0420 01:27:40.441397  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.441412  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:40.441426  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:40.441445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:40.498231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:40.498268  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:40.514550  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:40.514578  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:40.593580  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:40.593614  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:40.593631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:40.671736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:40.671778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
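	Editor's note: the block above is one complete diagnostics pass from logs.go: for each expected control-plane component it runs `crictl ps -a --quiet --name=<component>`, finds no containers, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The sketch below mirrors only the crictl query step, running locally instead of through ssh_runner; the component list and output handling are assumptions of the illustration.

```go
// Editorial sketch: list container IDs for a few control-plane components
// with crictl, mirroring the "listing CRI containers" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"}
	for _, name := range components {
		// Same flags as in the log; --quiet prints only container IDs.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
```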
	I0420 01:27:43.224892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:43.240876  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:43.240939  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:43.281583  142411 cri.go:89] found id: ""
	I0420 01:27:43.281621  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.281634  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:43.281643  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:43.281705  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:43.321079  142411 cri.go:89] found id: ""
	I0420 01:27:43.321115  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.321125  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:43.321132  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:43.321277  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:43.365827  142411 cri.go:89] found id: ""
	I0420 01:27:43.365855  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.365864  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:43.365870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:43.365921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.184872  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.683826  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:41.025729  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.025868  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:45.526436  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.393681  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:44.401124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.404317  142411 cri.go:89] found id: ""
	I0420 01:27:43.404349  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.404361  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:43.404370  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:43.404443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:43.449268  142411 cri.go:89] found id: ""
	I0420 01:27:43.449299  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.449323  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:43.449331  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:43.449408  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:43.487782  142411 cri.go:89] found id: ""
	I0420 01:27:43.487829  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.487837  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:43.487844  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:43.487909  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:43.526650  142411 cri.go:89] found id: ""
	I0420 01:27:43.526677  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.526688  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:43.526695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:43.526755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:43.565288  142411 cri.go:89] found id: ""
	I0420 01:27:43.565328  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.565340  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:43.565352  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:43.565368  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:43.618013  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:43.618046  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:43.634064  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:43.634101  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:43.710633  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:43.710663  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:43.710679  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:43.796658  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:43.796709  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.352329  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:46.366848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:46.366935  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:46.413643  142411 cri.go:89] found id: ""
	I0420 01:27:46.413676  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.413687  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:46.413695  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:46.413762  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:46.457976  142411 cri.go:89] found id: ""
	I0420 01:27:46.458002  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.458011  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:46.458020  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:46.458086  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:46.500291  142411 cri.go:89] found id: ""
	I0420 01:27:46.500317  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.500328  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:46.500334  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:46.500398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:46.541279  142411 cri.go:89] found id: ""
	I0420 01:27:46.541331  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.541343  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:46.541359  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:46.541442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:46.585613  142411 cri.go:89] found id: ""
	I0420 01:27:46.585642  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.585654  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:46.585661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:46.585726  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:46.634400  142411 cri.go:89] found id: ""
	I0420 01:27:46.634430  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.634441  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:46.634450  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:46.634534  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:46.676276  142411 cri.go:89] found id: ""
	I0420 01:27:46.676305  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.676313  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:46.676320  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:46.676380  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:46.719323  142411 cri.go:89] found id: ""
	I0420 01:27:46.719356  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.719369  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:46.719381  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:46.719398  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:46.799735  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:46.799765  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:46.799790  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:46.878323  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:46.878371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.931870  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:46.931902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:46.983217  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:46.983250  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:45.182485  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.183499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.526708  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:50.034262  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:46.897249  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.393599  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.500147  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:49.517380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:49.517461  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:49.561300  142411 cri.go:89] found id: ""
	I0420 01:27:49.561347  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.561358  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:49.561365  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:49.561432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:49.604569  142411 cri.go:89] found id: ""
	I0420 01:27:49.604594  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.604608  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:49.604614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:49.604664  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:49.644952  142411 cri.go:89] found id: ""
	I0420 01:27:49.644983  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.644999  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:49.645006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:49.645071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:49.694719  142411 cri.go:89] found id: ""
	I0420 01:27:49.694749  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.694757  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:49.694764  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:49.694815  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:49.743821  142411 cri.go:89] found id: ""
	I0420 01:27:49.743849  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.743857  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:49.743865  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:49.743936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:49.789125  142411 cri.go:89] found id: ""
	I0420 01:27:49.789152  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.789161  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:49.789167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:49.789233  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:49.828794  142411 cri.go:89] found id: ""
	I0420 01:27:49.828829  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.828841  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:49.828848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:49.828913  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:49.873335  142411 cri.go:89] found id: ""
	I0420 01:27:49.873366  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.873375  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:49.873385  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:49.873397  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.930590  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:49.930632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:49.946850  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:49.946889  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:50.039200  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:50.039220  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:50.039236  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:50.122067  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:50.122118  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:52.664342  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:52.682978  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:52.683061  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:52.733806  142411 cri.go:89] found id: ""
	I0420 01:27:52.733836  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.733848  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:52.733855  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:52.733921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:52.785977  142411 cri.go:89] found id: ""
	I0420 01:27:52.786008  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.786020  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:52.786027  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:52.786092  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:52.826957  142411 cri.go:89] found id: ""
	I0420 01:27:52.826987  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.826995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:52.827001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:52.827056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:52.876208  142411 cri.go:89] found id: ""
	I0420 01:27:52.876251  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.876265  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:52.876276  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:52.876354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:52.918629  142411 cri.go:89] found id: ""
	I0420 01:27:52.918666  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.918679  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:52.918687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:52.918767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:52.967604  142411 cri.go:89] found id: ""
	I0420 01:27:52.967646  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.967655  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:52.967661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:52.967729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:53.010948  142411 cri.go:89] found id: ""
	I0420 01:27:53.010975  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.010983  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:53.010988  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:53.011039  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:53.055569  142411 cri.go:89] found id: ""
	I0420 01:27:53.055594  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.055611  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:53.055620  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:53.055633  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:53.071038  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:53.071067  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:53.151334  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:53.151364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:53.151381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:53.238509  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:53.238553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:53.284898  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:53.284945  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.183562  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.682524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:53.684003  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.027739  141746 pod_ready.go:92] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.027773  141746 pod_ready.go:81] duration metric: took 12.008613872s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.027785  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033100  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.033124  141746 pod_ready.go:81] duration metric: took 5.331694ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033136  141746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:53.041387  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.542345  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.896822  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:54.395015  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.843065  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:55.856928  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:55.857001  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:55.903058  142411 cri.go:89] found id: ""
	I0420 01:27:55.903092  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.903103  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:55.903111  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:55.903170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:55.944369  142411 cri.go:89] found id: ""
	I0420 01:27:55.944402  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.944414  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:55.944421  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:55.944474  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:55.983485  142411 cri.go:89] found id: ""
	I0420 01:27:55.983510  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.983517  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:55.983523  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:55.983571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:56.021931  142411 cri.go:89] found id: ""
	I0420 01:27:56.021956  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.021964  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:56.021970  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:56.022019  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:56.066671  142411 cri.go:89] found id: ""
	I0420 01:27:56.066705  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.066717  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:56.066724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:56.066788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:56.107724  142411 cri.go:89] found id: ""
	I0420 01:27:56.107783  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.107794  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:56.107800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:56.107854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:56.149201  142411 cri.go:89] found id: ""
	I0420 01:27:56.149234  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.149246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:56.149255  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:56.149328  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:56.189580  142411 cri.go:89] found id: ""
	I0420 01:27:56.189621  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.189633  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:56.189645  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:56.189661  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:56.243425  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:56.243462  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:56.261043  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:56.261079  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:56.341944  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:56.341967  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:56.341980  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:56.423252  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:56.423294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:55.684408  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.183545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:57.542492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.040617  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:56.892991  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.893124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.893660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.968894  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:58.984559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:58.984648  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:59.021603  142411 cri.go:89] found id: ""
	I0420 01:27:59.021634  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.021655  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:59.021666  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:59.021756  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:59.061592  142411 cri.go:89] found id: ""
	I0420 01:27:59.061626  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.061642  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:59.061649  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:59.061701  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:59.101956  142411 cri.go:89] found id: ""
	I0420 01:27:59.101986  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.101996  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:59.102003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:59.102072  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:59.141104  142411 cri.go:89] found id: ""
	I0420 01:27:59.141136  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.141145  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:59.141151  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:59.141221  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:59.188973  142411 cri.go:89] found id: ""
	I0420 01:27:59.189005  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.189014  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:59.189022  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:59.189107  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:59.232598  142411 cri.go:89] found id: ""
	I0420 01:27:59.232632  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.232641  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:59.232647  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:59.232704  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:59.272623  142411 cri.go:89] found id: ""
	I0420 01:27:59.272660  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.272669  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:59.272675  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:59.272739  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:59.309951  142411 cri.go:89] found id: ""
	I0420 01:27:59.309977  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.309984  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:59.309994  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:59.310005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:59.366589  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:59.366626  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:59.382724  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:59.382756  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:59.461072  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:59.461102  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:59.461122  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:59.544736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:59.544769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:02.089118  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:02.105402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:02.105483  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:02.144665  142411 cri.go:89] found id: ""
	I0420 01:28:02.144691  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.144700  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:02.144706  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:02.144759  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:02.187471  142411 cri.go:89] found id: ""
	I0420 01:28:02.187498  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.187508  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:02.187515  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:02.187576  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:02.229206  142411 cri.go:89] found id: ""
	I0420 01:28:02.229233  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.229241  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:02.229247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:02.229335  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:02.279425  142411 cri.go:89] found id: ""
	I0420 01:28:02.279464  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.279478  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:02.279488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:02.279577  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:02.323033  142411 cri.go:89] found id: ""
	I0420 01:28:02.323066  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.323082  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:02.323090  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:02.323155  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:02.360121  142411 cri.go:89] found id: ""
	I0420 01:28:02.360158  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.360170  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:02.360178  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:02.360244  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:02.398756  142411 cri.go:89] found id: ""
	I0420 01:28:02.398786  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.398797  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:02.398804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:02.398867  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:02.437982  142411 cri.go:89] found id: ""
	I0420 01:28:02.438010  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.438018  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:02.438028  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:02.438041  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:02.489396  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:02.489434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:02.506764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:02.506796  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:02.591894  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:02.591915  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:02.591929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:02.675241  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:02.675281  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:00.683139  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.684787  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.540829  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.041823  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:03.393076  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.396351  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.224296  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.238522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:05.238593  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:05.278495  142411 cri.go:89] found id: ""
	I0420 01:28:05.278529  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.278540  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:05.278549  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:05.278621  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:05.318096  142411 cri.go:89] found id: ""
	I0420 01:28:05.318122  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.318130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:05.318136  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:05.318196  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:05.358607  142411 cri.go:89] found id: ""
	I0420 01:28:05.358636  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.358653  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:05.358658  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:05.358749  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:05.417163  142411 cri.go:89] found id: ""
	I0420 01:28:05.417199  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.417211  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:05.417218  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:05.417284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:05.468566  142411 cri.go:89] found id: ""
	I0420 01:28:05.468599  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.468610  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:05.468619  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:05.468691  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:05.514005  142411 cri.go:89] found id: ""
	I0420 01:28:05.514037  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.514047  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:05.514055  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:05.514112  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:05.554972  142411 cri.go:89] found id: ""
	I0420 01:28:05.555001  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.555012  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:05.555020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:05.555083  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:05.596736  142411 cri.go:89] found id: ""
	I0420 01:28:05.596764  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.596773  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:05.596787  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:05.596800  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:05.649680  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:05.649719  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:05.667583  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:05.667614  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:05.743886  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:05.743922  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:05.743939  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:05.827827  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:05.827863  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.384615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.181917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.182902  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.541045  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:09.542114  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.892610  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:10.392899  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:08.401190  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:08.403071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:08.445453  142411 cri.go:89] found id: ""
	I0420 01:28:08.445486  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.445497  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:08.445505  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:08.445573  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:08.487598  142411 cri.go:89] found id: ""
	I0420 01:28:08.487636  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.487649  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:08.487657  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:08.487727  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:08.531416  142411 cri.go:89] found id: ""
	I0420 01:28:08.531445  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.531457  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:08.531465  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:08.531526  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:08.574964  142411 cri.go:89] found id: ""
	I0420 01:28:08.575000  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.575012  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:08.575020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:08.575075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:08.612644  142411 cri.go:89] found id: ""
	I0420 01:28:08.612679  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.612688  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:08.612695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:08.612748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:08.651775  142411 cri.go:89] found id: ""
	I0420 01:28:08.651800  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.651811  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:08.651817  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:08.651869  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:08.692869  142411 cri.go:89] found id: ""
	I0420 01:28:08.692894  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.692902  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:08.692908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:08.692957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:08.731765  142411 cri.go:89] found id: ""
	I0420 01:28:08.731794  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.731805  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:08.731817  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:08.731836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:08.747401  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:08.747445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:08.831069  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:08.831091  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:08.831110  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:08.919053  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:08.919095  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.965814  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:08.965854  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.518303  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:11.535213  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:11.535294  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:11.577182  142411 cri.go:89] found id: ""
	I0420 01:28:11.577214  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.577223  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:11.577229  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:11.577289  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:11.615023  142411 cri.go:89] found id: ""
	I0420 01:28:11.615055  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.615064  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:11.615070  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:11.615138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:11.654062  142411 cri.go:89] found id: ""
	I0420 01:28:11.654089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.654097  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:11.654104  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:11.654170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:11.700846  142411 cri.go:89] found id: ""
	I0420 01:28:11.700875  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.700885  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:11.700892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:11.700966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:11.743061  142411 cri.go:89] found id: ""
	I0420 01:28:11.743089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.743100  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:11.743109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:11.743175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:11.783651  142411 cri.go:89] found id: ""
	I0420 01:28:11.783687  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.783698  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:11.783706  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:11.783781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:11.827099  142411 cri.go:89] found id: ""
	I0420 01:28:11.827130  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.827139  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:11.827144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:11.827197  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:11.867476  142411 cri.go:89] found id: ""
	I0420 01:28:11.867510  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.867523  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:11.867535  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:11.867554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.920211  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:11.920246  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:11.937632  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:11.937670  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:12.014917  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:12.014940  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:12.014955  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:12.096549  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:12.096586  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:09.684447  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.183063  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.041220  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.540620  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.893441  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:15.408953  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.653783  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:14.667893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:14.667955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:14.710098  142411 cri.go:89] found id: ""
	I0420 01:28:14.710153  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.710164  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:14.710172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:14.710240  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:14.750891  142411 cri.go:89] found id: ""
	I0420 01:28:14.750920  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.750929  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:14.750939  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:14.751010  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:14.794062  142411 cri.go:89] found id: ""
	I0420 01:28:14.794103  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.794127  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:14.794135  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:14.794204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:14.834333  142411 cri.go:89] found id: ""
	I0420 01:28:14.834363  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.834375  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:14.834383  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:14.834446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:14.874114  142411 cri.go:89] found id: ""
	I0420 01:28:14.874148  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.874160  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:14.874168  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:14.874238  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:14.912685  142411 cri.go:89] found id: ""
	I0420 01:28:14.912711  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.912720  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:14.912726  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:14.912787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:14.954050  142411 cri.go:89] found id: ""
	I0420 01:28:14.954076  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.954083  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:14.954089  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:14.954150  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:14.992310  142411 cri.go:89] found id: ""
	I0420 01:28:14.992348  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.992357  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:14.992365  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:14.992388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:15.047471  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:15.047512  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:15.065800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:15.065842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:15.146009  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:15.146037  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:15.146058  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:15.232920  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:15.232962  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:17.781215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:17.797404  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:17.797466  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:17.840532  142411 cri.go:89] found id: ""
	I0420 01:28:17.840564  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.840573  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:17.840579  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:17.840636  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:17.881562  142411 cri.go:89] found id: ""
	I0420 01:28:17.881588  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.881596  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:17.881602  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:17.881651  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:17.935068  142411 cri.go:89] found id: ""
	I0420 01:28:17.935098  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.935108  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:17.935115  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:17.935177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:17.980745  142411 cri.go:89] found id: ""
	I0420 01:28:17.980782  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.980795  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:17.980804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:17.980880  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:18.051120  142411 cri.go:89] found id: ""
	I0420 01:28:18.051153  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.051164  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:18.051171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:18.051235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:18.091741  142411 cri.go:89] found id: ""
	I0420 01:28:18.091776  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.091788  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:18.091796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:18.091864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:18.133438  142411 cri.go:89] found id: ""
	I0420 01:28:18.133472  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.133482  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:18.133488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:18.133560  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:18.174624  142411 cri.go:89] found id: ""
	I0420 01:28:18.174665  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.174679  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:18.174694  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:18.174713  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:18.228519  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:18.228563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:18.246452  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:18.246487  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:18.322051  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:18.322074  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:18.322088  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:14.684817  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.182405  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:16.541139  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.041191  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.895052  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.895901  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:18.404873  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:18.404904  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:20.950553  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:20.965081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:20.965139  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:21.007198  142411 cri.go:89] found id: ""
	I0420 01:28:21.007243  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.007255  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:21.007263  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:21.007330  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:21.050991  142411 cri.go:89] found id: ""
	I0420 01:28:21.051019  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.051028  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:21.051034  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:21.051104  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:21.091953  142411 cri.go:89] found id: ""
	I0420 01:28:21.091986  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.091995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:21.092001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:21.092085  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:21.134134  142411 cri.go:89] found id: ""
	I0420 01:28:21.134164  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.134174  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:21.134181  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:21.134251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:21.173698  142411 cri.go:89] found id: ""
	I0420 01:28:21.173724  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.173731  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:21.173737  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:21.173801  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:21.221327  142411 cri.go:89] found id: ""
	I0420 01:28:21.221354  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.221362  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:21.221369  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:21.221428  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:21.262752  142411 cri.go:89] found id: ""
	I0420 01:28:21.262780  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.262791  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:21.262798  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:21.262851  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:21.303497  142411 cri.go:89] found id: ""
	I0420 01:28:21.303524  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.303535  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:21.303547  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:21.303563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:21.358231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:21.358265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:21.373723  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:21.373753  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:21.465016  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:21.465044  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:21.465061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:21.552087  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:21.552117  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:19.683617  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.182720  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:21.540588  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.039211  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.393170  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.396378  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.099938  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:24.116967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:24.117045  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:24.159458  142411 cri.go:89] found id: ""
	I0420 01:28:24.159491  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.159501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:24.159508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:24.159574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:24.206028  142411 cri.go:89] found id: ""
	I0420 01:28:24.206054  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.206065  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:24.206072  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:24.206137  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:24.248047  142411 cri.go:89] found id: ""
	I0420 01:28:24.248088  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.248101  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:24.248109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:24.248176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:24.287867  142411 cri.go:89] found id: ""
	I0420 01:28:24.287898  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.287909  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:24.287917  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:24.287995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:24.329399  142411 cri.go:89] found id: ""
	I0420 01:28:24.329433  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.329444  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:24.329452  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:24.329519  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:24.367846  142411 cri.go:89] found id: ""
	I0420 01:28:24.367871  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.367882  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:24.367889  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:24.367960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:24.414245  142411 cri.go:89] found id: ""
	I0420 01:28:24.414272  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.414283  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:24.414291  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:24.414354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:24.453268  142411 cri.go:89] found id: ""
	I0420 01:28:24.453302  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.453331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:24.453344  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:24.453366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:24.514501  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:24.514546  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:24.529551  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:24.529591  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:24.613734  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.613757  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:24.613775  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:24.693804  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:24.693843  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.238443  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:27.254172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:27.254235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:27.297048  142411 cri.go:89] found id: ""
	I0420 01:28:27.297101  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.297111  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:27.297119  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:27.297181  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:27.340145  142411 cri.go:89] found id: ""
	I0420 01:28:27.340171  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.340181  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:27.340189  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:27.340316  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:27.383047  142411 cri.go:89] found id: ""
	I0420 01:28:27.383077  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.383089  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:27.383096  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:27.383169  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:27.428088  142411 cri.go:89] found id: ""
	I0420 01:28:27.428122  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.428134  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:27.428142  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:27.428206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:27.468257  142411 cri.go:89] found id: ""
	I0420 01:28:27.468300  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.468310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:27.468317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:27.468389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:27.508834  142411 cri.go:89] found id: ""
	I0420 01:28:27.508873  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.508885  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:27.508892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:27.508953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:27.548853  142411 cri.go:89] found id: ""
	I0420 01:28:27.548893  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.548901  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:27.548908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:27.548956  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:27.587841  142411 cri.go:89] found id: ""
	I0420 01:28:27.587875  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.587886  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:27.587899  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:27.587917  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:27.667848  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:27.667888  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.714820  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:27.714856  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:27.766337  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:27.766381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:27.782585  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:27.782627  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:27.856172  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.184768  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.683097  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.040531  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:28.040802  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.542386  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.893091  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:29.393546  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.356809  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:30.372449  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:30.372529  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:30.422164  142411 cri.go:89] found id: ""
	I0420 01:28:30.422198  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.422209  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:30.422218  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:30.422283  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:30.460367  142411 cri.go:89] found id: ""
	I0420 01:28:30.460395  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.460404  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:30.460411  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:30.460498  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:30.508423  142411 cri.go:89] found id: ""
	I0420 01:28:30.508460  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.508471  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:30.508479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:30.508546  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:30.553124  142411 cri.go:89] found id: ""
	I0420 01:28:30.553152  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.553161  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:30.553167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:30.553225  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:30.601866  142411 cri.go:89] found id: ""
	I0420 01:28:30.601908  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.601919  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:30.601939  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:30.602014  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:30.645413  142411 cri.go:89] found id: ""
	I0420 01:28:30.645446  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.645457  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:30.645467  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:30.645539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:30.690955  142411 cri.go:89] found id: ""
	I0420 01:28:30.690988  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.690997  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:30.691006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:30.691077  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:30.732146  142411 cri.go:89] found id: ""
	I0420 01:28:30.732186  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.732197  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:30.732209  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:30.732228  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:30.786890  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:30.786928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:30.802887  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:30.802920  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:30.884422  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:30.884447  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:30.884461  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:30.967504  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:30.967540  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:29.183645  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.683218  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.044031  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:35.540100  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.897363  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:34.392658  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.515720  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:33.531895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:33.531953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:33.574626  142411 cri.go:89] found id: ""
	I0420 01:28:33.574668  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.574682  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:33.574690  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:33.574757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:33.620527  142411 cri.go:89] found id: ""
	I0420 01:28:33.620553  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.620562  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:33.620568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:33.620630  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:33.659685  142411 cri.go:89] found id: ""
	I0420 01:28:33.659711  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.659719  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:33.659724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:33.659773  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:33.699390  142411 cri.go:89] found id: ""
	I0420 01:28:33.699414  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.699422  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:33.699427  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:33.699485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:33.743819  142411 cri.go:89] found id: ""
	I0420 01:28:33.743844  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.743852  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:33.743858  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:33.743907  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:33.788416  142411 cri.go:89] found id: ""
	I0420 01:28:33.788442  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.788450  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:33.788456  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:33.788514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:33.834105  142411 cri.go:89] found id: ""
	I0420 01:28:33.834129  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.834138  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:33.834144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:33.834206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:33.884118  142411 cri.go:89] found id: ""
	I0420 01:28:33.884152  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.884164  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:33.884176  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:33.884193  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:33.940493  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:33.940525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:33.954800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:33.954829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:34.030788  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:34.030812  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:34.030829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:34.119533  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:34.119574  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.667132  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:36.684253  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:36.684334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:36.723598  142411 cri.go:89] found id: ""
	I0420 01:28:36.723629  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.723641  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:36.723649  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:36.723718  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:36.761563  142411 cri.go:89] found id: ""
	I0420 01:28:36.761594  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.761606  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:36.761614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:36.761679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:36.803553  142411 cri.go:89] found id: ""
	I0420 01:28:36.803590  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.803603  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:36.803611  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:36.803674  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:36.840368  142411 cri.go:89] found id: ""
	I0420 01:28:36.840407  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.840421  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:36.840430  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:36.840497  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:36.879689  142411 cri.go:89] found id: ""
	I0420 01:28:36.879724  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.879735  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:36.879743  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:36.879807  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:36.920757  142411 cri.go:89] found id: ""
	I0420 01:28:36.920785  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.920796  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:36.920809  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:36.920871  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:36.957522  142411 cri.go:89] found id: ""
	I0420 01:28:36.957548  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.957556  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:36.957562  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:36.957624  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:36.997358  142411 cri.go:89] found id: ""
	I0420 01:28:36.997390  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.997400  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:36.997409  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:36.997422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:37.055063  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:37.055105  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:37.070691  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:37.070720  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:37.150114  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:37.150140  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:37.150152  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:37.228676  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:37.228711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.182514  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.183398  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.040622  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.539486  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:36.395217  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.893457  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.894381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:39.776620  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:39.792201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:39.792268  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:39.831544  142411 cri.go:89] found id: ""
	I0420 01:28:39.831568  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.831576  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:39.831588  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:39.831652  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:39.869458  142411 cri.go:89] found id: ""
	I0420 01:28:39.869488  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.869496  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:39.869503  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:39.869564  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:39.911588  142411 cri.go:89] found id: ""
	I0420 01:28:39.911615  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.911626  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:39.911633  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:39.911703  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:39.952458  142411 cri.go:89] found id: ""
	I0420 01:28:39.952489  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.952505  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:39.952513  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:39.952580  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:39.992988  142411 cri.go:89] found id: ""
	I0420 01:28:39.993016  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.993023  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:39.993029  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:39.993117  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:40.038306  142411 cri.go:89] found id: ""
	I0420 01:28:40.038348  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.038359  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:40.038367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:40.038432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:40.082185  142411 cri.go:89] found id: ""
	I0420 01:28:40.082219  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.082230  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:40.082238  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:40.082332  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:40.120346  142411 cri.go:89] found id: ""
	I0420 01:28:40.120373  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.120382  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:40.120391  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:40.120405  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:40.173735  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:40.173769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:40.191808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:40.191844  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:40.271429  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:40.271456  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:40.271473  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:40.361519  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:40.361558  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:42.938354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:42.953088  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:42.953167  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:42.992539  142411 cri.go:89] found id: ""
	I0420 01:28:42.992564  142411 logs.go:276] 0 containers: []
	W0420 01:28:42.992571  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:42.992577  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:42.992637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:43.032017  142411 cri.go:89] found id: ""
	I0420 01:28:43.032059  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.032074  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:43.032082  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:43.032142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:43.077229  142411 cri.go:89] found id: ""
	I0420 01:28:43.077258  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.077266  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:43.077272  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:43.077342  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:43.117107  142411 cri.go:89] found id: ""
	I0420 01:28:43.117128  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.117139  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:43.117145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:43.117206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:43.156262  142411 cri.go:89] found id: ""
	I0420 01:28:43.156297  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.156310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:43.156317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:43.156384  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:43.195897  142411 cri.go:89] found id: ""
	I0420 01:28:43.195927  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.195935  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:43.195942  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:43.195990  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:43.230468  142411 cri.go:89] found id: ""
	I0420 01:28:43.230498  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.230513  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:43.230522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:43.230586  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:43.271980  142411 cri.go:89] found id: ""
	I0420 01:28:43.272009  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.272023  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:43.272035  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:43.272050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:43.331606  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:43.331641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:43.348411  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:43.348437  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:28:40.682973  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.182655  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:42.540341  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.039729  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.393377  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.893276  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:28:43.428628  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:43.428654  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:43.428675  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:43.511471  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:43.511506  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.056166  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:46.071677  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:46.071744  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:46.110710  142411 cri.go:89] found id: ""
	I0420 01:28:46.110740  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.110753  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:46.110761  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:46.110825  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:46.170680  142411 cri.go:89] found id: ""
	I0420 01:28:46.170712  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.170724  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:46.170731  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:46.170794  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:46.216387  142411 cri.go:89] found id: ""
	I0420 01:28:46.216413  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.216421  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:46.216429  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:46.216485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:46.258641  142411 cri.go:89] found id: ""
	I0420 01:28:46.258674  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.258685  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:46.258694  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:46.258755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:46.296359  142411 cri.go:89] found id: ""
	I0420 01:28:46.296395  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.296407  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:46.296416  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:46.296480  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:46.335194  142411 cri.go:89] found id: ""
	I0420 01:28:46.335223  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.335238  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:46.335247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:46.335300  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:46.373748  142411 cri.go:89] found id: ""
	I0420 01:28:46.373777  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.373789  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:46.373796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:46.373860  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:46.416960  142411 cri.go:89] found id: ""
	I0420 01:28:46.416987  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.416995  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:46.417005  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:46.417017  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:46.497542  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:46.497582  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.548086  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:46.548136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:46.607354  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:46.607390  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:46.624379  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:46.624415  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:46.707425  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:45.682511  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.682752  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.046102  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.540014  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.895805  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:50.393001  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.208459  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:49.223081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:49.223146  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:49.258688  142411 cri.go:89] found id: ""
	I0420 01:28:49.258718  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.258728  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:49.258734  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:49.258791  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:49.296817  142411 cri.go:89] found id: ""
	I0420 01:28:49.296859  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.296870  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:49.296878  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:49.296941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:49.337821  142411 cri.go:89] found id: ""
	I0420 01:28:49.337853  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.337863  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:49.337870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:49.337940  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:49.381360  142411 cri.go:89] found id: ""
	I0420 01:28:49.381384  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.381392  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:49.381397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:49.381463  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:49.420099  142411 cri.go:89] found id: ""
	I0420 01:28:49.420143  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.420154  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:49.420162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:49.420223  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:49.459810  142411 cri.go:89] found id: ""
	I0420 01:28:49.459843  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.459850  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:49.459859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:49.459911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:49.499776  142411 cri.go:89] found id: ""
	I0420 01:28:49.499808  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.499820  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:49.499828  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:49.499894  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:49.536115  142411 cri.go:89] found id: ""
	I0420 01:28:49.536147  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.536158  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:49.536169  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:49.536190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:49.594665  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:49.594701  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:49.611896  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:49.611929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:49.689667  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:49.689685  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:49.689697  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.769061  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:49.769106  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.319299  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:52.336861  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:52.336934  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:52.380690  142411 cri.go:89] found id: ""
	I0420 01:28:52.380717  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.380725  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:52.380731  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:52.380781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:52.429798  142411 cri.go:89] found id: ""
	I0420 01:28:52.429831  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.429843  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:52.429851  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:52.429915  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:52.474087  142411 cri.go:89] found id: ""
	I0420 01:28:52.474120  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.474130  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:52.474139  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:52.474204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:52.514739  142411 cri.go:89] found id: ""
	I0420 01:28:52.514776  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.514789  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:52.514796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:52.514852  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:52.562100  142411 cri.go:89] found id: ""
	I0420 01:28:52.562195  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.562228  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:52.562236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:52.562324  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:52.623266  142411 cri.go:89] found id: ""
	I0420 01:28:52.623301  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.623313  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:52.623321  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:52.623386  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:52.667788  142411 cri.go:89] found id: ""
	I0420 01:28:52.667818  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.667828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:52.667838  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:52.667902  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:52.724607  142411 cri.go:89] found id: ""
	I0420 01:28:52.724636  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.724645  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:52.724654  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:52.724666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.774798  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:52.774836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:52.833949  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:52.833989  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:52.851757  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:52.851787  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:52.939092  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:52.939119  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:52.939136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.684112  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.182596  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:51.540918  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.039528  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.393913  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.892043  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:55.525807  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:55.540481  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:55.540557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:55.584415  142411 cri.go:89] found id: ""
	I0420 01:28:55.584447  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.584458  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:55.584466  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:55.584538  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:55.623920  142411 cri.go:89] found id: ""
	I0420 01:28:55.623955  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.623965  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:55.623973  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:55.624037  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:55.667768  142411 cri.go:89] found id: ""
	I0420 01:28:55.667802  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.667810  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:55.667816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:55.667889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:55.708466  142411 cri.go:89] found id: ""
	I0420 01:28:55.708502  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.708513  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:55.708520  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:55.708600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:55.748797  142411 cri.go:89] found id: ""
	I0420 01:28:55.748838  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.748849  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:55.748857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:55.748919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:55.791714  142411 cri.go:89] found id: ""
	I0420 01:28:55.791743  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.791752  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:55.791761  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:55.791832  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:55.833836  142411 cri.go:89] found id: ""
	I0420 01:28:55.833862  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.833872  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:55.833879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:55.833942  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:55.877425  142411 cri.go:89] found id: ""
	I0420 01:28:55.877462  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.877472  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:55.877484  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:55.877501  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:55.933237  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:55.933280  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:55.949507  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:55.949534  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:56.025596  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:56.025624  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:56.025641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:56.105403  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:56.105439  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:54.683664  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.684401  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.040380  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.040834  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:00.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.893067  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.894882  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.653368  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:58.669367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:58.669429  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:58.712457  142411 cri.go:89] found id: ""
	I0420 01:28:58.712490  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.712501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:58.712508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:58.712574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:58.750246  142411 cri.go:89] found id: ""
	I0420 01:28:58.750273  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.750281  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:58.750287  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:58.750351  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:58.793486  142411 cri.go:89] found id: ""
	I0420 01:28:58.793514  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.793522  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:58.793529  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:58.793595  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:58.839413  142411 cri.go:89] found id: ""
	I0420 01:28:58.839448  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.839461  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:58.839469  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:58.839537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:58.881385  142411 cri.go:89] found id: ""
	I0420 01:28:58.881418  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.881430  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:58.881438  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:58.881509  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:58.923900  142411 cri.go:89] found id: ""
	I0420 01:28:58.923945  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.923965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:58.923975  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:58.924038  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:58.962795  142411 cri.go:89] found id: ""
	I0420 01:28:58.962836  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.962848  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:58.962856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:58.962919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:59.006309  142411 cri.go:89] found id: ""
	I0420 01:28:59.006341  142411 logs.go:276] 0 containers: []
	W0420 01:28:59.006350  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:59.006360  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:59.006372  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:59.062778  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:59.062819  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.078600  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:59.078630  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:59.159340  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:59.159361  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:59.159376  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:59.247257  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:59.247307  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:01.792687  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:01.808507  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:01.808588  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:01.851642  142411 cri.go:89] found id: ""
	I0420 01:29:01.851680  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.851691  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:01.851699  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:01.851765  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:01.891516  142411 cri.go:89] found id: ""
	I0420 01:29:01.891549  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.891560  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:01.891568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:01.891640  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:01.934353  142411 cri.go:89] found id: ""
	I0420 01:29:01.934390  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.934402  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:01.934410  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:01.934479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:01.972552  142411 cri.go:89] found id: ""
	I0420 01:29:01.972587  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.972599  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:01.972607  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:01.972711  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:02.012316  142411 cri.go:89] found id: ""
	I0420 01:29:02.012348  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.012360  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:02.012368  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:02.012423  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:02.056951  142411 cri.go:89] found id: ""
	I0420 01:29:02.056984  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.056994  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:02.057001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:02.057164  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:02.104061  142411 cri.go:89] found id: ""
	I0420 01:29:02.104091  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.104102  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:02.104110  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:02.104163  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:02.144085  142411 cri.go:89] found id: ""
	I0420 01:29:02.144114  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.144125  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:02.144137  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:02.144160  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:02.216560  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:02.216585  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:02.216598  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:02.307178  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:02.307222  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:02.349769  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:02.349798  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:02.401141  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:02.401176  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.185384  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.684462  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.685188  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:02.041060  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.540616  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.393943  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.894095  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.917513  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:04.934187  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:04.934266  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:04.970258  142411 cri.go:89] found id: ""
	I0420 01:29:04.970289  142411 logs.go:276] 0 containers: []
	W0420 01:29:04.970298  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:04.970304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:04.970359  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:05.012853  142411 cri.go:89] found id: ""
	I0420 01:29:05.012883  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.012893  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:05.012899  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:05.012960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:05.054793  142411 cri.go:89] found id: ""
	I0420 01:29:05.054822  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.054833  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:05.054842  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:05.054910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:05.094637  142411 cri.go:89] found id: ""
	I0420 01:29:05.094674  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.094684  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:05.094701  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:05.094770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:05.134874  142411 cri.go:89] found id: ""
	I0420 01:29:05.134903  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.134912  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:05.134918  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:05.134973  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:05.175637  142411 cri.go:89] found id: ""
	I0420 01:29:05.175668  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.175679  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:05.175687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:05.175752  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:05.217809  142411 cri.go:89] found id: ""
	I0420 01:29:05.217847  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.217860  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:05.217867  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:05.217933  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:05.266884  142411 cri.go:89] found id: ""
	I0420 01:29:05.266917  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.266930  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:05.266941  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:05.266958  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:05.323765  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:05.323818  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:05.338524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:05.338553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:05.419860  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:05.419889  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:05.419906  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:05.506268  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:05.506311  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.055690  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:08.072692  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:08.072758  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:08.116247  142411 cri.go:89] found id: ""
	I0420 01:29:08.116287  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.116296  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:08.116304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:08.116369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:08.163152  142411 cri.go:89] found id: ""
	I0420 01:29:08.163177  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.163185  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:08.163190  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:08.163246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:08.207330  142411 cri.go:89] found id: ""
	I0420 01:29:08.207357  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.207365  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:08.207371  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:08.207422  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:08.249833  142411 cri.go:89] found id: ""
	I0420 01:29:08.249864  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.249873  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:08.249879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:08.249941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:08.290834  142411 cri.go:89] found id: ""
	I0420 01:29:08.290867  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.290876  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:08.290883  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:08.290957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:08.333767  142411 cri.go:89] found id: ""
	I0420 01:29:08.333799  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.333809  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:08.333816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:08.333888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:08.381431  142411 cri.go:89] found id: ""
	I0420 01:29:08.381459  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.381468  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:08.381474  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:08.381532  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:06.183719  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.184829  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.544179  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:09.039956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.394434  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.893184  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:10.897462  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.423702  142411 cri.go:89] found id: ""
	I0420 01:29:08.423727  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.423739  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:08.423751  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:08.423767  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.468422  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:08.468460  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:08.524091  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:08.524125  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:08.540294  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:08.540323  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:08.622439  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:08.622472  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:08.622488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.208472  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:11.225412  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:11.225479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:11.273723  142411 cri.go:89] found id: ""
	I0420 01:29:11.273755  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.273767  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:11.273775  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:11.273840  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:11.316083  142411 cri.go:89] found id: ""
	I0420 01:29:11.316118  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.316130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:11.316137  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:11.316203  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:11.355632  142411 cri.go:89] found id: ""
	I0420 01:29:11.355659  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.355668  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:11.355674  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:11.355734  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:11.397277  142411 cri.go:89] found id: ""
	I0420 01:29:11.397305  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.397327  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:11.397335  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:11.397399  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:11.439333  142411 cri.go:89] found id: ""
	I0420 01:29:11.439357  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.439366  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:11.439372  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:11.439433  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:11.477044  142411 cri.go:89] found id: ""
	I0420 01:29:11.477072  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.477079  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:11.477086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:11.477142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:11.516150  142411 cri.go:89] found id: ""
	I0420 01:29:11.516184  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.516196  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:11.516204  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:11.516274  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:11.557272  142411 cri.go:89] found id: ""
	I0420 01:29:11.557303  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.557331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:11.557344  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:11.557366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.652272  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:11.652319  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:11.700469  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:11.700504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:11.756674  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:11.756711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:11.772377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:11.772407  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:11.851387  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:10.682669  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:12.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:11.041282  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.541986  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.393346  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:15.394909  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:14.352257  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:14.367635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:14.367714  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:14.408757  142411 cri.go:89] found id: ""
	I0420 01:29:14.408779  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.408788  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:14.408794  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:14.408843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:14.455123  142411 cri.go:89] found id: ""
	I0420 01:29:14.455150  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.455159  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:14.455165  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:14.455239  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:14.499546  142411 cri.go:89] found id: ""
	I0420 01:29:14.499573  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.499581  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:14.499587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:14.499635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:14.541811  142411 cri.go:89] found id: ""
	I0420 01:29:14.541841  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.541851  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:14.541859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:14.541923  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:14.586965  142411 cri.go:89] found id: ""
	I0420 01:29:14.586990  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.587001  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:14.587008  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:14.587071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:14.625251  142411 cri.go:89] found id: ""
	I0420 01:29:14.625279  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.625288  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:14.625294  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:14.625377  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:14.665038  142411 cri.go:89] found id: ""
	I0420 01:29:14.665067  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.665079  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:14.665086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:14.665157  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:14.706931  142411 cri.go:89] found id: ""
	I0420 01:29:14.706964  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.706978  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:14.706992  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:14.707044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:14.761681  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:14.761717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:14.776324  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:14.776350  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:14.856707  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:14.856727  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:14.856738  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:14.944019  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:14.944064  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:17.489112  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:17.507594  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:17.507660  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:17.556091  142411 cri.go:89] found id: ""
	I0420 01:29:17.556122  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.556132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:17.556140  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:17.556205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:17.600016  142411 cri.go:89] found id: ""
	I0420 01:29:17.600072  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.600086  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:17.600107  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:17.600171  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:17.643074  142411 cri.go:89] found id: ""
	I0420 01:29:17.643106  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.643118  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:17.643125  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:17.643190  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:17.684798  142411 cri.go:89] found id: ""
	I0420 01:29:17.684827  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.684838  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:17.684845  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:17.684910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:17.725451  142411 cri.go:89] found id: ""
	I0420 01:29:17.725481  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.725494  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:17.725503  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:17.725575  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:17.765918  142411 cri.go:89] found id: ""
	I0420 01:29:17.765944  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.765952  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:17.765959  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:17.766023  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:17.806011  142411 cri.go:89] found id: ""
	I0420 01:29:17.806038  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.806049  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:17.806056  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:17.806122  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:17.848409  142411 cri.go:89] found id: ""
	I0420 01:29:17.848441  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.848453  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:17.848465  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:17.848488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:17.903854  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:17.903900  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:17.919156  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:17.919191  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:18.008073  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:18.008115  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:18.008133  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:18.095887  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:18.095929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:14.687917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.182326  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:16.039159  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:18.040487  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.540830  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.893270  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.392563  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.646919  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:20.664559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:20.664635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:20.714440  142411 cri.go:89] found id: ""
	I0420 01:29:20.714472  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.714481  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:20.714487  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:20.714543  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:20.755249  142411 cri.go:89] found id: ""
	I0420 01:29:20.755276  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.755287  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:20.755294  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:20.755355  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:20.795744  142411 cri.go:89] found id: ""
	I0420 01:29:20.795777  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.795786  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:20.795797  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:20.795864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:20.838083  142411 cri.go:89] found id: ""
	I0420 01:29:20.838111  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.838120  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:20.838128  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:20.838193  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:20.880198  142411 cri.go:89] found id: ""
	I0420 01:29:20.880227  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.880238  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:20.880245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:20.880312  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:20.920496  142411 cri.go:89] found id: ""
	I0420 01:29:20.920522  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.920530  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:20.920536  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:20.920618  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:20.960137  142411 cri.go:89] found id: ""
	I0420 01:29:20.960170  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.960180  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:20.960186  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:20.960251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:20.999583  142411 cri.go:89] found id: ""
	I0420 01:29:20.999624  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.999637  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:20.999649  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:20.999665  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:21.077439  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:21.077476  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:21.121104  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:21.121148  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:21.173871  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:21.173909  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:21.189767  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:21.189795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:21.264715  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:19.682554  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:21.682995  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.543452  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:25.040875  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.393626  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:24.894279  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:23.765605  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:23.782250  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:23.782334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:23.827248  142411 cri.go:89] found id: ""
	I0420 01:29:23.827277  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.827285  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:23.827291  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:23.827349  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:23.867610  142411 cri.go:89] found id: ""
	I0420 01:29:23.867636  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.867645  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:23.867651  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:23.867712  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:23.906244  142411 cri.go:89] found id: ""
	I0420 01:29:23.906271  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.906278  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:23.906283  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:23.906343  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:23.952256  142411 cri.go:89] found id: ""
	I0420 01:29:23.952288  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.952306  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:23.952314  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:23.952378  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:23.992843  142411 cri.go:89] found id: ""
	I0420 01:29:23.992879  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.992888  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:23.992896  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:23.992959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:24.036460  142411 cri.go:89] found id: ""
	I0420 01:29:24.036493  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.036504  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:24.036512  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:24.036582  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:24.075910  142411 cri.go:89] found id: ""
	I0420 01:29:24.075944  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.075955  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:24.075962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:24.076033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:24.122638  142411 cri.go:89] found id: ""
	I0420 01:29:24.122676  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.122688  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:24.122698  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:24.122717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:24.138022  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:24.138061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:24.220977  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:24.220998  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:24.221012  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.302928  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:24.302972  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:24.351237  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:24.351277  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:26.910354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:26.926815  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:26.926900  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:26.966123  142411 cri.go:89] found id: ""
	I0420 01:29:26.966155  142411 logs.go:276] 0 containers: []
	W0420 01:29:26.966165  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:26.966172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:26.966246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:27.011679  142411 cri.go:89] found id: ""
	I0420 01:29:27.011714  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.011727  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:27.011735  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:27.011806  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:27.052116  142411 cri.go:89] found id: ""
	I0420 01:29:27.052141  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.052148  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:27.052155  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:27.052202  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:27.090375  142411 cri.go:89] found id: ""
	I0420 01:29:27.090404  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.090413  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:27.090419  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:27.090476  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:27.131911  142411 cri.go:89] found id: ""
	I0420 01:29:27.131946  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.131957  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:27.131965  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:27.132033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:27.176663  142411 cri.go:89] found id: ""
	I0420 01:29:27.176696  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.176714  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:27.176723  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:27.176788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:27.217806  142411 cri.go:89] found id: ""
	I0420 01:29:27.217836  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.217846  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:27.217853  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:27.217917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:27.253956  142411 cri.go:89] found id: ""
	I0420 01:29:27.253981  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.253989  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:27.253998  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:27.254014  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:27.298225  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:27.298264  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:27.351213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:27.351259  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:27.366352  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:27.366388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:27.466716  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:27.466742  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:27.466770  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.184743  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:26.681862  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:28.683193  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.042377  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.539413  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.395660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.893947  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:30.050528  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:30.065697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:30.065769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:30.104643  142411 cri.go:89] found id: ""
	I0420 01:29:30.104675  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.104686  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:30.104694  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:30.104753  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:30.143864  142411 cri.go:89] found id: ""
	I0420 01:29:30.143892  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.143903  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:30.143910  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:30.143976  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:30.187925  142411 cri.go:89] found id: ""
	I0420 01:29:30.187954  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.187964  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:30.187972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:30.188035  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:30.227968  142411 cri.go:89] found id: ""
	I0420 01:29:30.227995  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.228003  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:30.228009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:30.228059  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.269550  142411 cri.go:89] found id: ""
	I0420 01:29:30.269584  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.269596  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:30.269604  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:30.269672  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:30.311777  142411 cri.go:89] found id: ""
	I0420 01:29:30.311810  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.311819  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:30.311827  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:30.311878  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:30.353569  142411 cri.go:89] found id: ""
	I0420 01:29:30.353601  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.353610  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:30.353617  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:30.353683  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:30.395003  142411 cri.go:89] found id: ""
	I0420 01:29:30.395032  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.395043  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:30.395054  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:30.395066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:30.455495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:30.455536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:30.473749  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:30.473778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:30.555370  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:30.555397  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:30.555417  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:30.637079  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:30.637124  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:33.188917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:33.203689  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:33.203757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:33.246796  142411 cri.go:89] found id: ""
	I0420 01:29:33.246828  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.246840  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:33.246848  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:33.246911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:33.284667  142411 cri.go:89] found id: ""
	I0420 01:29:33.284700  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.284712  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:33.284720  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:33.284782  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:33.328653  142411 cri.go:89] found id: ""
	I0420 01:29:33.328688  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.328701  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:33.328709  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:33.328777  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:33.369081  142411 cri.go:89] found id: ""
	I0420 01:29:33.369107  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.369121  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:33.369130  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:33.369180  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.684861  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:32.689885  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.547492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.040445  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.894902  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.392071  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:33.414282  142411 cri.go:89] found id: ""
	I0420 01:29:33.414313  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.414322  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:33.414327  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:33.414411  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:33.457086  142411 cri.go:89] found id: ""
	I0420 01:29:33.457112  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.457119  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:33.457126  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:33.457176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:33.498686  142411 cri.go:89] found id: ""
	I0420 01:29:33.498716  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.498729  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:33.498738  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:33.498808  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:33.538872  142411 cri.go:89] found id: ""
	I0420 01:29:33.538907  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.538920  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:33.538932  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:33.538959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:33.592586  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:33.592631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:33.609200  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:33.609226  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:33.690795  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:33.690820  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:33.690836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:33.776092  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:33.776131  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
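	(The cycle above keeps repeating the same probe: pgrep for a running apiserver, then one crictl query per control-plane component. A minimal shell sketch of that probe follows; the component names and crictl flags are copied from the Run: lines in this log, while running it via an interactive shell on the node is an assumption, not something the log shows.)

	  # hedged sketch of the per-component container probe seen in this log;
	  # run on the node itself (access path is an assumption)
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    if [ -z "$ids" ]; then
	      echo "no container found matching \"$name\""   # matches the logs.go:278 warnings above
	    else
	      echo "$name: $ids"
	    fi
	  done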
	I0420 01:29:36.331256  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:36.348813  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:36.348892  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:36.397503  142411 cri.go:89] found id: ""
	I0420 01:29:36.397527  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.397534  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:36.397540  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:36.397603  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:36.439638  142411 cri.go:89] found id: ""
	I0420 01:29:36.439667  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.439675  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:36.439685  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:36.439761  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:36.477155  142411 cri.go:89] found id: ""
	I0420 01:29:36.477182  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.477194  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:36.477201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:36.477259  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:36.533326  142411 cri.go:89] found id: ""
	I0420 01:29:36.533360  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.533373  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:36.533381  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:36.533446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:36.573056  142411 cri.go:89] found id: ""
	I0420 01:29:36.573093  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.573107  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:36.573114  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:36.573177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:36.611901  142411 cri.go:89] found id: ""
	I0420 01:29:36.611937  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.611949  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:36.611957  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:36.612017  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:36.656780  142411 cri.go:89] found id: ""
	I0420 01:29:36.656810  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.656823  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:36.656830  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:36.656899  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:36.699872  142411 cri.go:89] found id: ""
	I0420 01:29:36.699906  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.699916  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:36.699928  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:36.699943  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:36.758859  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:36.758895  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:36.775108  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:36.775145  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:36.858001  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:36.858027  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:36.858044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:36.936114  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:36.936154  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:35.182481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:37.182529  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.041125  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.043465  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.540023  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.395316  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.894062  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.894416  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:39.487167  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:39.502929  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:39.502995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:39.547338  142411 cri.go:89] found id: ""
	I0420 01:29:39.547363  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.547371  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:39.547377  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:39.547430  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:39.608684  142411 cri.go:89] found id: ""
	I0420 01:29:39.608714  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.608722  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:39.608728  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:39.608793  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:39.679248  142411 cri.go:89] found id: ""
	I0420 01:29:39.679281  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.679292  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:39.679300  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:39.679361  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:39.725226  142411 cri.go:89] found id: ""
	I0420 01:29:39.725257  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.725270  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:39.725278  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:39.725363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:39.767653  142411 cri.go:89] found id: ""
	I0420 01:29:39.767681  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.767690  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:39.767697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:39.767760  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:39.807848  142411 cri.go:89] found id: ""
	I0420 01:29:39.807885  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.807893  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:39.807900  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:39.807968  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:39.847171  142411 cri.go:89] found id: ""
	I0420 01:29:39.847201  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.847212  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:39.847219  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:39.847284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:39.884959  142411 cri.go:89] found id: ""
	I0420 01:29:39.884996  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.885007  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:39.885034  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:39.885050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:39.959245  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:39.959269  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:39.959286  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:40.041394  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:40.041436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:40.083125  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:40.083171  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:40.139902  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:40.139957  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:42.657038  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:42.673303  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:42.673407  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:42.717081  142411 cri.go:89] found id: ""
	I0420 01:29:42.717106  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.717114  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:42.717120  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:42.717170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:42.762322  142411 cri.go:89] found id: ""
	I0420 01:29:42.762357  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.762367  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:42.762375  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:42.762442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:42.805059  142411 cri.go:89] found id: ""
	I0420 01:29:42.805112  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.805122  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:42.805131  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:42.805201  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:42.848539  142411 cri.go:89] found id: ""
	I0420 01:29:42.848568  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.848580  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:42.848587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:42.848679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:42.887915  142411 cri.go:89] found id: ""
	I0420 01:29:42.887949  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.887960  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:42.887967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:42.888032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:42.938832  142411 cri.go:89] found id: ""
	I0420 01:29:42.938867  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.938878  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:42.938888  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:42.938957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:42.982376  142411 cri.go:89] found id: ""
	I0420 01:29:42.982402  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.982409  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:42.982415  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:42.982477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:43.023264  142411 cri.go:89] found id: ""
	I0420 01:29:43.023293  142411 logs.go:276] 0 containers: []
	W0420 01:29:43.023301  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:43.023313  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:43.023326  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:43.079673  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:43.079714  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:43.094753  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:43.094786  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:43.180113  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:43.180149  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:43.180177  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:43.259830  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:43.259872  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
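	(Each cycle also gathers the same log bundle. The commands below are taken verbatim from the Run: lines above; only their grouping into one list is new, as a sketch of what the bundle contains.)

	  sudo journalctl -u kubelet -n 400                                          # kubelet logs
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # dmesg warnings/errors
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig                                # fails while the apiserver on localhost:8443 is down
	  sudo journalctl -u crio -n 400                                             # CRI-O logs
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status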
	I0420 01:29:39.182568  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:41.186805  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.683131  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:42.540687  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.039857  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.392948  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.394081  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.802515  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:45.816908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:45.816965  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:45.861091  142411 cri.go:89] found id: ""
	I0420 01:29:45.861123  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.861132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:45.861138  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:45.861224  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:45.901677  142411 cri.go:89] found id: ""
	I0420 01:29:45.901702  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.901710  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:45.901716  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:45.901767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:45.938301  142411 cri.go:89] found id: ""
	I0420 01:29:45.938325  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.938334  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:45.938339  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:45.938393  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:45.978432  142411 cri.go:89] found id: ""
	I0420 01:29:45.978460  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.978473  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:45.978479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:45.978537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:46.019410  142411 cri.go:89] found id: ""
	I0420 01:29:46.019446  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.019455  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:46.019461  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:46.019524  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:46.071002  142411 cri.go:89] found id: ""
	I0420 01:29:46.071032  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.071041  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:46.071052  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:46.071124  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:46.110362  142411 cri.go:89] found id: ""
	I0420 01:29:46.110391  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.110402  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:46.110409  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:46.110477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:46.152276  142411 cri.go:89] found id: ""
	I0420 01:29:46.152311  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.152322  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:46.152334  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:46.152351  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:46.205121  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:46.205159  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:46.221808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:46.221842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:46.300394  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:46.300418  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:46.300434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:46.391961  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:46.392002  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:45.684038  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.176081  141927 pod_ready.go:81] duration metric: took 4m0.00056563s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	E0420 01:29:48.176112  141927 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:29:48.176130  141927 pod_ready.go:38] duration metric: took 4m7.024291569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:29:48.176166  141927 kubeadm.go:591] duration metric: took 4m16.819079549s to restartPrimaryControlPlane
	W0420 01:29:48.176256  141927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:29:48.176291  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:29:47.040255  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.043956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:47.893875  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.894291  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.945086  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:48.961414  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:48.961491  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:49.010230  142411 cri.go:89] found id: ""
	I0420 01:29:49.010285  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.010299  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:49.010309  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:49.010385  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:49.054455  142411 cri.go:89] found id: ""
	I0420 01:29:49.054481  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.054491  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:49.054499  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:49.054566  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:49.094536  142411 cri.go:89] found id: ""
	I0420 01:29:49.094562  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.094572  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:49.094580  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:49.094740  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:49.134004  142411 cri.go:89] found id: ""
	I0420 01:29:49.134035  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.134046  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:49.134054  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:49.134118  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:49.173697  142411 cri.go:89] found id: ""
	I0420 01:29:49.173728  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.173741  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:49.173750  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:49.173817  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:49.215655  142411 cri.go:89] found id: ""
	I0420 01:29:49.215681  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.215689  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:49.215695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:49.215745  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:49.258282  142411 cri.go:89] found id: ""
	I0420 01:29:49.258312  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.258324  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:49.258332  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:49.258394  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:49.298565  142411 cri.go:89] found id: ""
	I0420 01:29:49.298597  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.298608  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:49.298620  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:49.298638  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:49.378833  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:49.378862  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:49.378880  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:49.467477  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:49.467517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:49.521747  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:49.521788  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:49.583386  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:49.583436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.102969  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:52.122971  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:52.123053  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:52.166166  142411 cri.go:89] found id: ""
	I0420 01:29:52.166199  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.166210  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:52.166219  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:52.166287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:52.206790  142411 cri.go:89] found id: ""
	I0420 01:29:52.206817  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.206824  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:52.206830  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:52.206889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:52.249879  142411 cri.go:89] found id: ""
	I0420 01:29:52.249911  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.249921  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:52.249931  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:52.249997  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:52.293953  142411 cri.go:89] found id: ""
	I0420 01:29:52.293997  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.294009  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:52.294018  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:52.294095  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:52.339447  142411 cri.go:89] found id: ""
	I0420 01:29:52.339478  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.339490  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:52.339497  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:52.339558  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:52.378383  142411 cri.go:89] found id: ""
	I0420 01:29:52.378416  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.378428  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:52.378435  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:52.378488  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:52.423079  142411 cri.go:89] found id: ""
	I0420 01:29:52.423121  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.423130  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:52.423137  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:52.423205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:52.459525  142411 cri.go:89] found id: ""
	I0420 01:29:52.459559  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.459572  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:52.459594  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:52.459610  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:52.567141  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:52.567186  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:52.618194  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:52.618235  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:52.681921  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:52.681959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.699065  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:52.699108  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:52.776829  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:51.540922  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.043224  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:52.397218  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.895147  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:55.277933  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:55.293380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:55.293455  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:55.337443  142411 cri.go:89] found id: ""
	I0420 01:29:55.337475  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.337483  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:55.337491  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:55.337557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:55.375911  142411 cri.go:89] found id: ""
	I0420 01:29:55.375942  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.375951  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:55.375957  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:55.376022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:55.418545  142411 cri.go:89] found id: ""
	I0420 01:29:55.418569  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.418577  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:55.418583  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:55.418635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:55.459343  142411 cri.go:89] found id: ""
	I0420 01:29:55.459378  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.459390  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:55.459397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:55.459452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:55.503851  142411 cri.go:89] found id: ""
	I0420 01:29:55.503878  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.503887  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:55.503895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:55.503959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:55.542533  142411 cri.go:89] found id: ""
	I0420 01:29:55.542556  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.542562  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:55.542568  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:55.542623  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:55.582205  142411 cri.go:89] found id: ""
	I0420 01:29:55.582236  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.582246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:55.582252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:55.582314  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:55.624727  142411 cri.go:89] found id: ""
	I0420 01:29:55.624757  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.624769  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:55.624781  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:55.624803  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:55.675403  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:55.675438  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:55.691492  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:55.691516  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:55.772283  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:55.772313  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:55.772330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:55.859440  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:55.859477  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:56.543221  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.041874  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:57.393723  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.894390  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:58.406009  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:58.422305  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:58.422382  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:58.468206  142411 cri.go:89] found id: ""
	I0420 01:29:58.468303  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.468321  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:58.468329  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:58.468402  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:58.513981  142411 cri.go:89] found id: ""
	I0420 01:29:58.514018  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.514027  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:58.514041  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:58.514105  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:58.559967  142411 cri.go:89] found id: ""
	I0420 01:29:58.560000  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.560011  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:58.560019  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:58.560084  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:58.600710  142411 cri.go:89] found id: ""
	I0420 01:29:58.600744  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.600763  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:58.600771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:58.600834  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:58.645995  142411 cri.go:89] found id: ""
	I0420 01:29:58.646022  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.646030  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:58.646036  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:58.646097  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:58.684930  142411 cri.go:89] found id: ""
	I0420 01:29:58.684957  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.684965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:58.684972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:58.685022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:58.727225  142411 cri.go:89] found id: ""
	I0420 01:29:58.727251  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.727259  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:58.727265  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:58.727319  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:58.765244  142411 cri.go:89] found id: ""
	I0420 01:29:58.765282  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.765293  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:58.765303  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:58.765330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:58.817791  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:58.817822  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:58.832882  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:58.832926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:58.919297  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:58.919325  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:58.919342  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:59.002590  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:59.002637  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.551854  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:01.568974  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:01.569054  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:01.609165  142411 cri.go:89] found id: ""
	I0420 01:30:01.609191  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.609200  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:01.609206  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:01.609272  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:01.653349  142411 cri.go:89] found id: ""
	I0420 01:30:01.653383  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.653396  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:01.653405  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:01.653482  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:01.698961  142411 cri.go:89] found id: ""
	I0420 01:30:01.698991  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.699002  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:01.699009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:01.699063  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:01.739230  142411 cri.go:89] found id: ""
	I0420 01:30:01.739271  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.739283  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:01.739292  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:01.739376  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:01.781839  142411 cri.go:89] found id: ""
	I0420 01:30:01.781873  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.781885  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:01.781893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:01.781960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:01.821212  142411 cri.go:89] found id: ""
	I0420 01:30:01.821241  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.821252  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:01.821259  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:01.821339  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:01.859959  142411 cri.go:89] found id: ""
	I0420 01:30:01.859984  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.859993  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:01.859999  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:01.860060  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:01.898832  142411 cri.go:89] found id: ""
	I0420 01:30:01.898858  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.898865  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:01.898875  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:01.898886  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.943065  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:01.943156  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:01.995618  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:01.995654  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:02.010489  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:02.010517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:02.090181  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:02.090222  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:02.090238  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:01.541135  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.041977  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:02.394456  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.894450  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.671376  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:04.687535  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:04.687629  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:04.728732  142411 cri.go:89] found id: ""
	I0420 01:30:04.728765  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.728778  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:04.728786  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:04.728854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:04.768537  142411 cri.go:89] found id: ""
	I0420 01:30:04.768583  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.768602  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:04.768610  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:04.768676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:04.811714  142411 cri.go:89] found id: ""
	I0420 01:30:04.811741  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.811750  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:04.811756  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:04.811816  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:04.852324  142411 cri.go:89] found id: ""
	I0420 01:30:04.852360  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.852371  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:04.852379  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:04.852452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:04.891657  142411 cri.go:89] found id: ""
	I0420 01:30:04.891688  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.891700  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:04.891708  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:04.891774  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:04.933192  142411 cri.go:89] found id: ""
	I0420 01:30:04.933222  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.933230  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:04.933236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:04.933291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:04.972796  142411 cri.go:89] found id: ""
	I0420 01:30:04.972819  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.972828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:04.972834  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:04.972888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:05.014782  142411 cri.go:89] found id: ""
	I0420 01:30:05.014821  142411 logs.go:276] 0 containers: []
	W0420 01:30:05.014833  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:05.014846  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:05.014862  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:05.067438  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:05.067470  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:05.121336  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:05.121371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:05.137495  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:05.137529  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:05.214132  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:05.214153  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:05.214170  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:07.796964  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:07.810856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:07.810917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:07.846993  142411 cri.go:89] found id: ""
	I0420 01:30:07.847024  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.847033  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:07.847040  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:07.847089  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:07.886422  142411 cri.go:89] found id: ""
	I0420 01:30:07.886452  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.886464  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:07.886474  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:07.886567  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:07.942200  142411 cri.go:89] found id: ""
	I0420 01:30:07.942230  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.942238  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:07.942245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:07.942296  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:07.980179  142411 cri.go:89] found id: ""
	I0420 01:30:07.980215  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.980226  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:07.980235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:07.980299  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:08.020097  142411 cri.go:89] found id: ""
	I0420 01:30:08.020130  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.020140  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:08.020145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:08.020215  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:08.063793  142411 cri.go:89] found id: ""
	I0420 01:30:08.063837  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.063848  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:08.063857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:08.063930  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:08.108674  142411 cri.go:89] found id: ""
	I0420 01:30:08.108705  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.108716  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:08.108724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:08.108798  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:08.147467  142411 cri.go:89] found id: ""
	I0420 01:30:08.147495  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.147503  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:08.147512  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:08.147525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:08.239416  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:08.239466  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:08.294639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:08.294669  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:08.349753  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:08.349795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:08.368971  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:08.369003  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:30:06.540958  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:08.541701  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:06.898857  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:09.397590  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:30:08.449996  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:10.950318  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:10.964969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:10.965032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:11.006321  142411 cri.go:89] found id: ""
	I0420 01:30:11.006354  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.006365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:11.006375  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:11.006437  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:11.047982  142411 cri.go:89] found id: ""
	I0420 01:30:11.048010  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.048019  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:11.048025  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:11.048073  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:11.089185  142411 cri.go:89] found id: ""
	I0420 01:30:11.089217  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.089226  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:11.089232  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:11.089287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:11.131293  142411 cri.go:89] found id: ""
	I0420 01:30:11.131322  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.131335  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:11.131344  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:11.131398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:11.170394  142411 cri.go:89] found id: ""
	I0420 01:30:11.170419  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.170427  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:11.170432  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:11.170485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:11.210580  142411 cri.go:89] found id: ""
	I0420 01:30:11.210619  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.210631  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:11.210640  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:11.210706  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:11.251938  142411 cri.go:89] found id: ""
	I0420 01:30:11.251977  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.251990  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:11.251998  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:11.252064  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:11.295999  142411 cri.go:89] found id: ""
	I0420 01:30:11.296033  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.296045  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:11.296057  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:11.296072  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:11.378564  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:11.378632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:11.422836  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:11.422868  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:11.475893  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:11.475928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:11.491524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:11.491555  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:11.569066  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:11.041078  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:13.540339  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:15.541762  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:11.893724  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.394206  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.886464  142057 pod_ready.go:81] duration metric: took 4m0.00077804s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
	E0420 01:30:14.886500  142057 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:30:14.886528  142057 pod_ready.go:38] duration metric: took 4m14.554070758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:14.886572  142057 kubeadm.go:591] duration metric: took 4m22.173690393s to restartPrimaryControlPlane
	W0420 01:30:14.886657  142057 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:14.886691  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:14.070158  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:14.086000  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:14.086067  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:14.128864  142411 cri.go:89] found id: ""
	I0420 01:30:14.128894  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.128906  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:14.128914  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:14.128986  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:14.169447  142411 cri.go:89] found id: ""
	I0420 01:30:14.169482  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.169497  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:14.169506  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:14.169583  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:14.210007  142411 cri.go:89] found id: ""
	I0420 01:30:14.210043  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.210054  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:14.210062  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:14.210119  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:14.247652  142411 cri.go:89] found id: ""
	I0420 01:30:14.247685  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.247695  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:14.247703  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:14.247764  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:14.290788  142411 cri.go:89] found id: ""
	I0420 01:30:14.290820  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.290830  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:14.290847  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:14.290908  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:14.351514  142411 cri.go:89] found id: ""
	I0420 01:30:14.351548  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.351570  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:14.351581  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:14.351637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:14.423481  142411 cri.go:89] found id: ""
	I0420 01:30:14.423520  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.423534  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:14.423543  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:14.423615  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:14.465597  142411 cri.go:89] found id: ""
	I0420 01:30:14.465622  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.465630  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:14.465639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:14.465655  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:14.522669  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:14.522705  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:14.541258  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:14.541293  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:14.618657  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:14.618678  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:14.618691  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:14.702616  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:14.702658  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:17.256212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:17.277171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:17.277250  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:17.321548  142411 cri.go:89] found id: ""
	I0420 01:30:17.321582  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.321600  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:17.321607  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:17.321676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:17.362856  142411 cri.go:89] found id: ""
	I0420 01:30:17.362883  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.362890  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:17.362896  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:17.362966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:17.409494  142411 cri.go:89] found id: ""
	I0420 01:30:17.409525  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.409539  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:17.409548  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:17.409631  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:17.447759  142411 cri.go:89] found id: ""
	I0420 01:30:17.447801  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.447812  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:17.447819  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:17.447885  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:17.498416  142411 cri.go:89] found id: ""
	I0420 01:30:17.498444  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.498454  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:17.498460  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:17.498528  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:17.546025  142411 cri.go:89] found id: ""
	I0420 01:30:17.546055  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.546064  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:17.546072  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:17.546138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:17.585797  142411 cri.go:89] found id: ""
	I0420 01:30:17.585829  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.585840  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:17.585848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:17.585919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:17.630850  142411 cri.go:89] found id: ""
	I0420 01:30:17.630886  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.630899  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:17.630911  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:17.630926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:17.689472  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:17.689510  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:17.705603  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:17.705642  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:17.794094  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:17.794137  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:17.794155  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:17.879397  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:17.879435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:18.041437  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.044174  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.428142  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:20.444936  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:20.445018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:20.487317  142411 cri.go:89] found id: ""
	I0420 01:30:20.487354  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.487365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:20.487373  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:20.487443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:20.537209  142411 cri.go:89] found id: ""
	I0420 01:30:20.537241  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.537254  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:20.537262  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:20.537348  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:20.584311  142411 cri.go:89] found id: ""
	I0420 01:30:20.584343  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.584352  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:20.584357  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:20.584413  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:20.631915  142411 cri.go:89] found id: ""
	I0420 01:30:20.631948  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.631959  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:20.631969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:20.632040  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:20.679680  142411 cri.go:89] found id: ""
	I0420 01:30:20.679707  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.679716  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:20.679721  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:20.679770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:20.724967  142411 cri.go:89] found id: ""
	I0420 01:30:20.725002  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.725013  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:20.725027  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:20.725091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:20.772717  142411 cri.go:89] found id: ""
	I0420 01:30:20.772751  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.772762  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:20.772771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:20.772837  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:20.812421  142411 cri.go:89] found id: ""
	I0420 01:30:20.812449  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.812460  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:20.812471  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:20.812485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:20.870522  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:20.870554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:20.886764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:20.886793  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:20.963941  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:20.963964  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:20.963979  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:21.045738  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:21.045778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:20.850989  141927 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.674674204s)
	I0420 01:30:20.851082  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:20.868537  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:20.880284  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:20.891650  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:20.891672  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:20.891726  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:30:20.902443  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:20.902509  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:20.913476  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:30:20.923762  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:20.923836  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:20.934281  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.944194  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:20.944254  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.955506  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:30:20.968039  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:20.968107  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:20.978918  141927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:21.214688  141927 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:22.539778  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:24.543547  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:23.600037  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:23.616539  142411 kubeadm.go:591] duration metric: took 4m4.142686832s to restartPrimaryControlPlane
	W0420 01:30:23.616641  142411 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:23.616676  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:25.481285  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.864573977s)
	I0420 01:30:25.481385  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:25.500950  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:25.518624  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:25.532506  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:25.532531  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:25.532584  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:25.546634  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:25.546708  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:25.561379  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:25.575506  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:25.575627  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:25.590615  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.604855  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:25.604923  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.619717  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:25.634525  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:25.634607  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:25.649408  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:25.735636  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:30:25.735697  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:25.913199  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:25.913347  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:25.913483  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:26.120240  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:26.122066  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:26.122169  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:26.122279  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:26.122395  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:26.122499  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:26.122623  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:26.122715  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:26.122806  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:26.122898  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:26.122999  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:26.123113  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:26.123173  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:26.123244  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:26.243908  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:26.354349  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:26.605778  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:26.833914  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:26.855348  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:26.857029  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:26.857250  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:27.010707  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:27.012314  142411 out.go:204]   - Booting up control plane ...
	I0420 01:30:27.012456  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:27.036284  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:27.049123  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:27.050561  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:27.053222  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:30:30.213456  141927 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:30.213557  141927 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:30.213687  141927 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:30.213826  141927 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:30.213915  141927 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:30.213978  141927 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:30.215501  141927 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:30.215594  141927 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:30.215667  141927 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:30.215802  141927 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:30.215886  141927 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:30.215960  141927 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:30.216018  141927 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:30.216097  141927 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:30.216156  141927 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:30.216258  141927 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:30.216350  141927 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:30.216385  141927 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:30.216447  141927 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:30.216517  141927 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:30.216589  141927 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:30.216653  141927 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:30.216743  141927 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:30.216832  141927 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:30.216933  141927 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:30.217019  141927 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:30.218228  141927 out.go:204]   - Booting up control plane ...
	I0420 01:30:30.218341  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:30.218446  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:30.218516  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:30.218615  141927 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:30.218703  141927 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:30.218753  141927 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:30.218904  141927 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:30.218975  141927 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:30.219027  141927 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001925972s
	I0420 01:30:30.219128  141927 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:30.219216  141927 kubeadm.go:309] [api-check] The API server is healthy after 5.502367015s
	I0420 01:30:30.219336  141927 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:30.219504  141927 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:30.219576  141927 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:30.219816  141927 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-907988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:30.219880  141927 kubeadm.go:309] [bootstrap-token] Using token: ozlrl4.y5r3psi4bnl35gso
	I0420 01:30:30.221283  141927 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:30.221416  141927 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:30.221533  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:30.221728  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:30.221968  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:30.222146  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:30.222255  141927 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:30.222385  141927 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:30.222455  141927 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:30.222524  141927 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:30.222534  141927 kubeadm.go:309] 
	I0420 01:30:30.222614  141927 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:30.222628  141927 kubeadm.go:309] 
	I0420 01:30:30.222692  141927 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:30.222699  141927 kubeadm.go:309] 
	I0420 01:30:30.222723  141927 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:30.222772  141927 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:30.222815  141927 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:30.222821  141927 kubeadm.go:309] 
	I0420 01:30:30.222878  141927 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:30.222885  141927 kubeadm.go:309] 
	I0420 01:30:30.222923  141927 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:30.222929  141927 kubeadm.go:309] 
	I0420 01:30:30.222994  141927 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:30.223100  141927 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:30.223171  141927 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:30.223189  141927 kubeadm.go:309] 
	I0420 01:30:30.223281  141927 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:30.223346  141927 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:30.223354  141927 kubeadm.go:309] 
	I0420 01:30:30.223423  141927 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223527  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:30.223552  141927 kubeadm.go:309] 	--control-plane 
	I0420 01:30:30.223559  141927 kubeadm.go:309] 
	I0420 01:30:30.223627  141927 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:30.223635  141927 kubeadm.go:309] 
	I0420 01:30:30.223704  141927 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223811  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:30.223826  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:30:30.223833  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:30.225184  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:27.041383  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:29.540967  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:30.226237  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:30.241388  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:30:30.274356  141927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:30.274469  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:30.274503  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-907988 minikube.k8s.io/updated_at=2024_04_20T01_30_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=default-k8s-diff-port-907988 minikube.k8s.io/primary=true
	I0420 01:30:30.319402  141927 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:30.505362  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.006101  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.505679  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.005947  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.505747  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.005919  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.505449  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:34.006029  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.040710  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.541175  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.505846  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.006187  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.505618  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.005994  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.506217  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.006428  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.506359  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.006018  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.505454  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:39.006426  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.041157  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.542266  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.506227  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.005941  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.506123  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.006198  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.506244  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.006045  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.505458  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.006082  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.122481  141927 kubeadm.go:1107] duration metric: took 12.84807935s to wait for elevateKubeSystemPrivileges
	W0420 01:30:43.122525  141927 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:30:43.122535  141927 kubeadm.go:393] duration metric: took 5m11.83456536s to StartCluster
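The burst of repeated "kubectl get sa default" calls above is minikube polling for the controller-manager to create the default ServiceAccount before it finishes granting kube-system privileges (the elevateKubeSystemPrivileges step timed at ~12.8s here). The same check can be run from the host by hand, assuming the kubectl context carries the profile name, as minikube configures by default:

    kubectl --context default-k8s-diff-port-907988 -n default get serviceaccount default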
	I0420 01:30:43.122559  141927 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.122689  141927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:30:43.124746  141927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.125059  141927 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:30:43.126572  141927 out.go:177] * Verifying Kubernetes components...
	I0420 01:30:43.125129  141927 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:30:43.125301  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:30:43.128187  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:30:43.128231  141927 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128240  141927 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128277  141927 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-907988"
	I0420 01:30:43.128278  141927 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-907988"
	W0420 01:30:43.128288  141927 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:30:43.128302  141927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-907988"
	I0420 01:30:43.128352  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.128769  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128795  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128840  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128800  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128306  141927 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.128994  141927 addons.go:243] addon metrics-server should already be in state true
	I0420 01:30:43.129026  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.129378  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.129401  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.148251  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0420 01:30:43.148272  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0420 01:30:43.148503  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0420 01:30:43.148959  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.148985  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149060  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149605  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149626  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149683  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149688  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149698  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149706  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.150105  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150108  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150106  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150358  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.150703  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150733  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.150760  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150798  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.154242  141927 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.154266  141927 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:30:43.154300  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.154673  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.154715  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.167283  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0420 01:30:43.167925  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.168475  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.168496  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.168868  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.169094  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.171067  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0420 01:30:43.171384  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.173102  141927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:30:43.171760  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.172823  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0420 01:30:43.174639  141927 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.174661  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:30:43.174681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.174859  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.175307  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175331  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175460  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175476  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175799  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.175992  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.176361  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.176376  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.176686  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.178744  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.178848  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.180048  141927 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:30:43.179462  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.181257  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:30:43.181275  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:30:43.181289  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.181296  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.179641  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.182168  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.182437  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.182627  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.184562  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.184958  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.184985  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.185241  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.185430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.185621  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.185771  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.195778  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0420 01:30:43.196419  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.196979  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.197002  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.197763  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.198072  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.200177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.200480  141927 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.200497  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:30:43.200516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.204078  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.204128  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204154  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.204178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204275  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.204456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.204582  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.375731  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:30:43.424911  141927 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436729  141927 node_ready.go:49] node "default-k8s-diff-port-907988" has status "Ready":"True"
	I0420 01:30:43.436750  141927 node_ready.go:38] duration metric: took 11.810027ms for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436759  141927 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:43.445452  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:43.497224  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.526236  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.527573  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:30:43.527597  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:30:43.591844  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:30:43.591872  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:30:43.655692  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:43.655721  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:30:43.824523  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:44.808651  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.311370016s)
	I0420 01:30:44.808721  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808724  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.282444767s)
	I0420 01:30:44.808735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.808767  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808783  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809052  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809066  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809074  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809081  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809144  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809162  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809170  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809626  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809635  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809647  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809655  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:44.833935  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.833963  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.834326  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.834348  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316084  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.491512905s)
	I0420 01:30:45.316157  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316514  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316539  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316593  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316610  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316910  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316989  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.317007  141927 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-907988"
	I0420 01:30:45.316906  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:45.319289  141927 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:30:42.040865  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:44.042663  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.320468  141927 addons.go:505] duration metric: took 2.195343987s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
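With storage-provisioner, default-storageclass and metrics-server reported as enabled, one way to double-check from outside the VM is to query the objects the addon manifests normally create; the APIService name below is the one metrics-server usually registers, and the context name is assumed to match the profile:

    kubectl --context default-k8s-diff-port-907988 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-907988 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-907988 get storageclass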
	I0420 01:30:45.453717  141927 pod_ready.go:102] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.952010  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.952032  141927 pod_ready.go:81] duration metric: took 2.506556645s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.952040  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957512  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.957533  141927 pod_ready.go:81] duration metric: took 5.486362ms for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957541  141927 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962790  141927 pod_ready.go:92] pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.962810  141927 pod_ready.go:81] duration metric: took 5.261485ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962821  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968720  141927 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.968743  141927 pod_ready.go:81] duration metric: took 5.914425ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968754  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976930  141927 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.976946  141927 pod_ready.go:81] duration metric: took 8.183898ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976954  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350179  141927 pod_ready.go:92] pod "kube-proxy-jt8wr" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.350203  141927 pod_ready.go:81] duration metric: took 373.241134ms for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350212  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749542  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.749566  141927 pod_ready.go:81] duration metric: took 399.34726ms for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749573  141927 pod_ready.go:38] duration metric: took 3.312805349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:46.749587  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:30:46.749647  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:46.785318  141927 api_server.go:72] duration metric: took 3.660207577s to wait for apiserver process to appear ...
	I0420 01:30:46.785349  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:30:46.785373  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:30:46.793933  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:30:46.794890  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:30:46.794911  141927 api_server.go:131] duration metric: took 9.555146ms to wait for apiserver health ...
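The healthz probe logged here can be reproduced by hand; /healthz is served to unauthenticated clients when the default system:public-info-viewer binding is in place, so a plain curl against this profile's apiserver port (8444) is enough:

    curl -k https://192.168.39.222:8444/healthz
    # a healthy apiserver answers HTTP 200 with the body "ok"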
	I0420 01:30:46.794920  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:30:46.953036  141927 system_pods.go:59] 9 kube-system pods found
	I0420 01:30:46.953066  141927 system_pods.go:61] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:46.953070  141927 system_pods.go:61] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:46.953074  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:46.953077  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:46.953081  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:46.953085  141927 system_pods.go:61] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:46.953088  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:46.953094  141927 system_pods.go:61] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:46.953098  141927 system_pods.go:61] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:46.953108  141927 system_pods.go:74] duration metric: took 158.182751ms to wait for pod list to return data ...
	I0420 01:30:46.953116  141927 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:30:47.151205  141927 default_sa.go:45] found service account: "default"
	I0420 01:30:47.151245  141927 default_sa.go:55] duration metric: took 198.121475ms for default service account to be created ...
	I0420 01:30:47.151274  141927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:30:47.354321  141927 system_pods.go:86] 9 kube-system pods found
	I0420 01:30:47.354348  141927 system_pods.go:89] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:47.354353  141927 system_pods.go:89] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:47.354358  141927 system_pods.go:89] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:47.354364  141927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:47.354369  141927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:47.354373  141927 system_pods.go:89] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:47.354376  141927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:47.354383  141927 system_pods.go:89] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:47.354387  141927 system_pods.go:89] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:47.354395  141927 system_pods.go:126] duration metric: took 203.115923ms to wait for k8s-apps to be running ...
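metrics-server-569cc877fc-6rgpj is still Pending at this point, which is expected immediately after the addon manifests are applied; if it stays unready, the pod events and container logs are the first things to look at (assuming the addon's usual k8s-app=metrics-server label):

    kubectl --context default-k8s-diff-port-907988 -n kube-system describe pod -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-907988 -n kube-system logs -l k8s-app=metrics-server --tail=50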
	I0420 01:30:47.354403  141927 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:30:47.354452  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.370946  141927 system_svc.go:56] duration metric: took 16.532953ms WaitForService to wait for kubelet
	I0420 01:30:47.370977  141927 kubeadm.go:576] duration metric: took 4.245884115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:30:47.370997  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:30:47.550097  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:30:47.550127  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:30:47.550138  141927 node_conditions.go:105] duration metric: took 179.136105ms to run NodePressure ...
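The NodePressure step above reads the node's capacity fields (17734596Ki ephemeral storage and 2 CPUs for this VM); the same data is visible directly with a jsonpath query, again assuming the context matches the profile name:

    kubectl --context default-k8s-diff-port-907988 get node default-k8s-diff-port-907988 \
      -o jsonpath='{.status.capacity}'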
	I0420 01:30:47.550150  141927 start.go:240] waiting for startup goroutines ...
	I0420 01:30:47.550156  141927 start.go:245] waiting for cluster config update ...
	I0420 01:30:47.550167  141927 start.go:254] writing updated cluster config ...
	I0420 01:30:47.550493  141927 ssh_runner.go:195] Run: rm -f paused
	I0420 01:30:47.614715  141927 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:30:47.616658  141927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-907988" cluster and "default" namespace by default
	I0420 01:30:47.623645  142057 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.736926697s)
	I0420 01:30:47.623716  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.648132  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:47.662521  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:47.674241  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:47.674265  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:47.674311  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:47.684981  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:47.685037  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:47.696549  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:47.706838  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:47.706885  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:47.717387  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.732194  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:47.732252  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.743425  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:47.756579  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:47.756629  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:47.769210  142057 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:47.832909  142057 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:47.832972  142057 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:47.987090  142057 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:47.987209  142057 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:47.987380  142057 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:48.253287  142057 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:48.255451  142057 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:48.255552  142057 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:48.255657  142057 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:48.255767  142057 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:48.255880  142057 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:48.255992  142057 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:48.256076  142057 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:48.256170  142057 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:48.256250  142057 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:48.256344  142057 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:48.256445  142057 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:48.256500  142057 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:48.256563  142057 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:48.346357  142057 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:48.602240  142057 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:48.741597  142057 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:49.086311  142057 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:49.284340  142057 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:49.284671  142057 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:49.287663  142057 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:46.540199  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:48.540848  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:50.541579  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:49.289305  142057 out.go:204]   - Booting up control plane ...
	I0420 01:30:49.289430  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:49.289558  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:49.289646  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:49.309520  142057 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:49.311328  142057 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:49.311389  142057 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:49.448766  142057 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:49.448889  142057 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:49.950225  142057 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.460713ms
	I0420 01:30:49.950316  142057 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:55.452587  142057 kubeadm.go:309] [api-check] The API server is healthy after 5.502061843s
	I0420 01:30:55.466768  142057 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:55.500892  142057 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:55.538376  142057 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:55.538631  142057 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-269507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:55.559344  142057 kubeadm.go:309] [bootstrap-token] Using token: jtn2hn.nnhc9vssv65463xy
	I0420 01:30:52.542748  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.560872  142057 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:55.561022  142057 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:55.575617  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:55.583307  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:55.586398  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:55.596138  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:55.599717  142057 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:55.861367  142057 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:56.310991  142057 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:56.860904  142057 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:56.860939  142057 kubeadm.go:309] 
	I0420 01:30:56.861051  142057 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:56.861077  142057 kubeadm.go:309] 
	I0420 01:30:56.861180  142057 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:56.861201  142057 kubeadm.go:309] 
	I0420 01:30:56.861232  142057 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:56.861345  142057 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:56.861438  142057 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:56.861454  142057 kubeadm.go:309] 
	I0420 01:30:56.861534  142057 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:56.861544  142057 kubeadm.go:309] 
	I0420 01:30:56.861628  142057 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:56.861644  142057 kubeadm.go:309] 
	I0420 01:30:56.861728  142057 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:56.861822  142057 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:56.861895  142057 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:56.861923  142057 kubeadm.go:309] 
	I0420 01:30:56.862120  142057 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:56.862228  142057 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:56.862246  142057 kubeadm.go:309] 
	I0420 01:30:56.862371  142057 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862532  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:56.862571  142057 kubeadm.go:309] 	--control-plane 
	I0420 01:30:56.862580  142057 kubeadm.go:309] 
	I0420 01:30:56.862700  142057 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:56.862724  142057 kubeadm.go:309] 
	I0420 01:30:56.862827  142057 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862955  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:56.863259  142057 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
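The bootstrap token embedded in the join commands above expires after kubeadm's default 24h TTL; if a node needs to join later than that, a fresh join command can be printed on the control plane:

    sudo kubeadm token create --print-join-command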
	I0420 01:30:56.863343  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:30:56.863358  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:56.865193  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:57.541555  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:00.040222  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:56.866515  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:56.880013  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:30:56.900677  142057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:56.900773  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:56.900809  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-269507 minikube.k8s.io/updated_at=2024_04_20T01_30_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=embed-certs-269507 minikube.k8s.io/primary=true
	I0420 01:30:56.942362  142057 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:57.124807  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:57.625201  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.125867  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.625845  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.124923  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.625004  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.125467  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.625081  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:01.125446  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.539751  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:04.540090  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:01.625279  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.125084  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.625048  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.125567  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.625428  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.125592  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.625874  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.125031  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.625698  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:06.125620  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.054009  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:31:07.054375  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:07.054708  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
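This kubelet-check message comes from a different minikube process (142411) whose kubelet never answered on its local healthz port within the initial 40s. The usual way to dig in is from the affected node itself (for example via `minikube ssh` for that profile):

    systemctl status kubelet
    journalctl -u kubelet --no-pager | tail -n 50
    curl -sS http://localhost:10248/healthz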
	I0420 01:31:06.625682  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.125909  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.625563  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.125451  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.625265  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.125677  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.625433  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.720318  142057 kubeadm.go:1107] duration metric: took 12.81961115s to wait for elevateKubeSystemPrivileges
	W0420 01:31:09.720362  142057 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:31:09.720373  142057 kubeadm.go:393] duration metric: took 5m17.067399347s to StartCluster
	I0420 01:31:09.720426  142057 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.720552  142057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:31:09.722646  142057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.722904  142057 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:31:09.724771  142057 out.go:177] * Verifying Kubernetes components...
	I0420 01:31:09.722979  142057 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:31:09.723175  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:31:09.724863  142057 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-269507"
	I0420 01:31:09.726208  142057 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-269507"
	W0420 01:31:09.726229  142057 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:31:09.724870  142057 addons.go:69] Setting default-storageclass=true in profile "embed-certs-269507"
	I0420 01:31:09.726270  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726289  142057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-269507"
	I0420 01:31:09.724889  142057 addons.go:69] Setting metrics-server=true in profile "embed-certs-269507"
	I0420 01:31:09.726351  142057 addons.go:234] Setting addon metrics-server=true in "embed-certs-269507"
	W0420 01:31:09.726365  142057 addons.go:243] addon metrics-server should already be in state true
	I0420 01:31:09.726395  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726159  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:31:09.726699  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726737  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726771  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726785  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726803  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726793  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.742932  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41221
	I0420 01:31:09.743143  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0420 01:31:09.743375  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743666  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743951  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.743968  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744102  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.744120  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744439  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.744497  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.745152  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745162  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745178  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745195  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745923  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0420 01:31:09.746441  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.747173  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.747202  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.747637  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.747934  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.751736  142057 addons.go:234] Setting addon default-storageclass=true in "embed-certs-269507"
	W0420 01:31:09.751760  142057 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:31:09.751791  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.752174  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.752199  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.763296  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I0420 01:31:09.763475  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0420 01:31:09.764103  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764119  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764635  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764656  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.764807  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764821  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.765353  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765369  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765562  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.766352  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.767675  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.769455  142057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:31:09.768866  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.770529  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:31:09.770596  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:31:09.770618  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.771959  142057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:31:07.039635  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.040381  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.772109  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0420 01:31:09.773531  142057 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:09.773545  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:31:09.773560  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.773989  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.774697  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.774711  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.774889  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775069  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.775522  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.775550  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.775770  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.775840  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.775855  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775973  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.776144  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.776283  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.776967  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777306  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.777376  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777621  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.777811  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.777949  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.778092  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.791609  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0420 01:31:09.792008  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.792475  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.792492  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.792811  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.793110  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.794743  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.795008  142057 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:09.795023  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:31:09.795037  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.797655  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798120  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.798144  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798394  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.798603  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.798745  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.798888  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.957088  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:31:10.012344  142057 node_ready.go:35] waiting up to 6m0s for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023887  142057 node_ready.go:49] node "embed-certs-269507" has status "Ready":"True"
	I0420 01:31:10.023917  142057 node_ready.go:38] duration metric: took 11.536403ms for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023929  142057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:10.035096  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:10.210022  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:10.222715  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:10.251807  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:31:10.251836  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:31:10.342638  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:31:10.342664  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:31:10.480676  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:10.480700  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:31:10.655186  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:11.331066  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121005107s)
	I0420 01:31:11.331125  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.108375538s)
	I0420 01:31:11.331139  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331152  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331165  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331181  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331530  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331601  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331611  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331641  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331664  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331681  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331684  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331692  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331699  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331646  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331932  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331959  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331979  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331991  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331989  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.332003  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.364269  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.364296  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.364641  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.364667  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.364671  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809229  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.154002194s)
	I0420 01:31:11.809282  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809618  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.809676  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809688  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809705  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809717  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809954  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809983  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.810001  142057 addons.go:470] Verifying addon metrics-server=true in "embed-certs-269507"
	I0420 01:31:11.810004  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.811610  142057 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:31:12.055506  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:12.055793  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:11.813049  142057 addons.go:505] duration metric: took 2.090078148s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:31:12.044618  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:12.565519  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.565543  142057 pod_ready.go:81] duration metric: took 2.530392572s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.565552  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.577986  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.578011  142057 pod_ready.go:81] duration metric: took 12.452506ms for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.578020  142057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595104  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.595129  142057 pod_ready.go:81] duration metric: took 17.103577ms for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595139  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602502  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.602524  142057 pod_ready.go:81] duration metric: took 7.377832ms for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602538  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608443  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.608462  142057 pod_ready.go:81] duration metric: took 5.916781ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608471  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939418  142057 pod_ready.go:92] pod "kube-proxy-4x66x" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.939444  142057 pod_ready.go:81] duration metric: took 330.966964ms for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939454  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341528  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:13.341556  142057 pod_ready.go:81] duration metric: took 402.093841ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341565  142057 pod_ready.go:38] duration metric: took 3.317622631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:13.341583  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:31:13.341648  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:31:13.361938  142057 api_server.go:72] duration metric: took 3.638999445s to wait for apiserver process to appear ...
	I0420 01:31:13.361967  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:31:13.361987  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:31:13.367149  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:31:13.368215  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:31:13.368243  142057 api_server.go:131] duration metric: took 6.268859ms to wait for apiserver health ...
	I0420 01:31:13.368254  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:31:13.545177  142057 system_pods.go:59] 9 kube-system pods found
	I0420 01:31:13.545203  142057 system_pods.go:61] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.545207  142057 system_pods.go:61] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.545212  142057 system_pods.go:61] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.545215  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.545219  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.545222  142057 system_pods.go:61] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.545224  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.545230  142057 system_pods.go:61] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.545233  142057 system_pods.go:61] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.545242  142057 system_pods.go:74] duration metric: took 176.980813ms to wait for pod list to return data ...
	I0420 01:31:13.545249  142057 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:31:13.739865  142057 default_sa.go:45] found service account: "default"
	I0420 01:31:13.739892  142057 default_sa.go:55] duration metric: took 194.636223ms for default service account to be created ...
	I0420 01:31:13.739903  142057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:31:13.942758  142057 system_pods.go:86] 9 kube-system pods found
	I0420 01:31:13.942785  142057 system_pods.go:89] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.942793  142057 system_pods.go:89] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.942801  142057 system_pods.go:89] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.942812  142057 system_pods.go:89] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.942819  142057 system_pods.go:89] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.942829  142057 system_pods.go:89] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.942835  142057 system_pods.go:89] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.942846  142057 system_pods.go:89] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.942854  142057 system_pods.go:89] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.942863  142057 system_pods.go:126] duration metric: took 202.954629ms to wait for k8s-apps to be running ...
	I0420 01:31:13.942873  142057 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:31:13.942926  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:31:13.962754  142057 system_svc.go:56] duration metric: took 19.872903ms WaitForService to wait for kubelet
	I0420 01:31:13.962781  142057 kubeadm.go:576] duration metric: took 4.239850872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:31:13.962802  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:31:14.139800  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:31:14.139834  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:31:14.139848  142057 node_conditions.go:105] duration metric: took 177.041675ms to run NodePressure ...
	I0420 01:31:14.139862  142057 start.go:240] waiting for startup goroutines ...
	I0420 01:31:14.139872  142057 start.go:245] waiting for cluster config update ...
	I0420 01:31:14.139886  142057 start.go:254] writing updated cluster config ...
	I0420 01:31:14.140201  142057 ssh_runner.go:195] Run: rm -f paused
	I0420 01:31:14.190985  142057 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:31:14.193207  142057 out.go:177] * Done! kubectl is now configured to use "embed-certs-269507" cluster and "default" namespace by default
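A minimal post-start check, assuming only the context name reported in the log above (embed-certs-269507); this is an illustrative verification sketch, not a step the test itself runs:

	#!/bin/bash
	# List kube-system pods for the newly configured context to confirm the
	# control plane, CoreDNS, storage-provisioner and metrics-server pods exist.
	kubectl --context embed-certs-269507 get pods -n kube-system
	# Confirm a StorageClass is present after the default-storageclass addon was enabled.
	kubectl --context embed-certs-269507 get storageclass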
	I0420 01:31:11.040724  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:13.043491  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:15.540182  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:17.540894  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:19.541858  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:22.056094  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:22.056315  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:22.039484  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:24.043137  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:26.043262  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:28.540379  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:30.540568  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:32.543371  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:35.040187  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:37.541354  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:40.039779  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:42.057024  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:42.057278  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:42.040147  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:44.540170  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:46.540576  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:48.543604  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:51.034230  141746 pod_ready.go:81] duration metric: took 4m0.001077028s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	E0420 01:31:51.034258  141746 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:31:51.034280  141746 pod_ready.go:38] duration metric: took 4m12.046687249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:51.034308  141746 kubeadm.go:591] duration metric: took 4m55.947094434s to restartPrimaryControlPlane
	W0420 01:31:51.034367  141746 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:31:51.034400  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:22.058965  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:32:22.059213  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:32:22.059231  142411 kubeadm.go:309] 
	I0420 01:32:22.059284  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:32:22.059341  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:32:22.059351  142411 kubeadm.go:309] 
	I0420 01:32:22.059398  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:32:22.059449  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:32:22.059581  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:32:22.059606  142411 kubeadm.go:309] 
	I0420 01:32:22.059693  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:32:22.059725  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:32:22.059796  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:32:22.059821  142411 kubeadm.go:309] 
	I0420 01:32:22.059916  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:32:22.060046  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:32:22.060068  142411 kubeadm.go:309] 
	I0420 01:32:22.060225  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:32:22.060371  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:32:22.060498  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:32:22.060624  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:32:22.060643  142411 kubeadm.go:309] 
	I0420 01:32:22.061155  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:22.061294  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:32:22.061403  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
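kubeadm's guidance above already names the commands to run when the kubelet never becomes healthy. Gathered into one place, and only repeating what the log prints (CONTAINERID is kubeadm's own placeholder for whatever 'ps -a' reports), a troubleshooting pass on the node would look like:

	#!/bin/bash
	# Is the kubelet running at all, and what does its journal say?
	systemctl status kubelet
	journalctl -xeu kubelet
	# List Kubernetes containers known to CRI-O, excluding pause sandboxes.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of the failing control-plane container found above.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID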
	W0420 01:32:22.061569  142411 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0420 01:32:22.061628  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:23.211059  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.149398853s)
	I0420 01:32:23.211147  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.228140  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.240832  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.240868  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.240912  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.252674  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.252735  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.264128  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.274998  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.275059  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.286449  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.297377  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.297452  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.308971  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.320775  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.320842  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
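The cleanup above repeats the same check for each kubeconfig: grep it for the expected control-plane endpoint and delete it when the check fails. A compact sketch of that loop, using only the four paths and the https://control-plane.minikube.internal:8443 endpoint visible in the log:

	#!/bin/bash
	# Drop any kubeconfig that does not reference the expected control plane.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done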
	I0420 01:32:23.333601  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.490252  141746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.455825605s)
	I0420 01:32:23.490330  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.515027  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:32:23.528835  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.542901  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.542927  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.542969  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.554931  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.555006  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.570665  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.583505  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.583576  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.595835  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.607468  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.607538  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.620629  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.634141  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.634222  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.648360  141746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.727697  141746 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:32:23.727825  141746 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:32:23.899280  141746 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:32:23.899376  141746 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:32:23.899456  141746 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:32:24.139299  141746 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:32:24.141410  141746 out.go:204]   - Generating certificates and keys ...
	I0420 01:32:24.141522  141746 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:32:24.141618  141746 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:32:24.141719  141746 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:32:24.141814  141746 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:32:24.141912  141746 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:32:24.141987  141746 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:32:24.142076  141746 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:32:24.142172  141746 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:32:24.142348  141746 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:32:24.142589  141746 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:32:24.142757  141746 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:32:24.142990  141746 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:32:24.247270  141746 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:32:24.326535  141746 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:32:24.538489  141746 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:32:24.594810  141746 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:32:24.712812  141746 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:32:24.713304  141746 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:32:24.719376  141746 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:32:24.721510  141746 out.go:204]   - Booting up control plane ...
	I0420 01:32:24.721649  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:32:24.721781  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:32:24.722470  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:32:24.748410  141746 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:32:24.750247  141746 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:32:24.750320  141746 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:32:24.906734  141746 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:32:24.906859  141746 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:32:25.409625  141746 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.844847ms
	I0420 01:32:25.409771  141746 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:32:23.603058  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:30.912062  141746 kubeadm.go:309] [api-check] The API server is healthy after 5.502434175s
	I0420 01:32:30.935231  141746 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:32:30.954860  141746 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:32:30.990255  141746 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:32:30.990480  141746 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-338118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:32:31.004218  141746 kubeadm.go:309] [bootstrap-token] Using token: 6ub3et.0wyu42zodual4kt8
	I0420 01:32:31.005771  141746 out.go:204]   - Configuring RBAC rules ...
	I0420 01:32:31.005875  141746 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:32:31.011978  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:32:31.020750  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:32:31.024958  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:32:31.032499  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:32:31.037128  141746 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:32:31.320324  141746 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:32:31.761773  141746 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:32:32.322540  141746 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:32:32.322563  141746 kubeadm.go:309] 
	I0420 01:32:32.322633  141746 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:32:32.322648  141746 kubeadm.go:309] 
	I0420 01:32:32.322728  141746 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:32:32.322737  141746 kubeadm.go:309] 
	I0420 01:32:32.322763  141746 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:32:32.322833  141746 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:32:32.322906  141746 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:32:32.322918  141746 kubeadm.go:309] 
	I0420 01:32:32.323005  141746 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:32:32.323015  141746 kubeadm.go:309] 
	I0420 01:32:32.323083  141746 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:32:32.323110  141746 kubeadm.go:309] 
	I0420 01:32:32.323184  141746 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:32:32.323304  141746 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:32:32.323362  141746 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:32:32.323372  141746 kubeadm.go:309] 
	I0420 01:32:32.323522  141746 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:32:32.323660  141746 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:32:32.323677  141746 kubeadm.go:309] 
	I0420 01:32:32.323765  141746 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.323916  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:32:32.323948  141746 kubeadm.go:309] 	--control-plane 
	I0420 01:32:32.323957  141746 kubeadm.go:309] 
	I0420 01:32:32.324035  141746 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:32:32.324049  141746 kubeadm.go:309] 
	I0420 01:32:32.324201  141746 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.324348  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:32:32.324967  141746 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
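
	(For reference: if the bootstrap token printed above expires, an equivalent join command can be regenerated on the control-plane node. This is a generic kubeadm sketch, not something the test run executed:)

	    # prints a fresh "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..." line
	    sudo kubeadm token create --print-join-command
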
	I0420 01:32:32.325210  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:32:32.325228  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:32:32.327624  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:32:32.329029  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:32:32.344181  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
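
	(The bridge CNI config written above can be inspected directly on the node; a sketch of how one might check it, with the path taken from the log line above, not a step the test itself performs:)

	    # list CNI configs and show the file minikube just wrote
	    ls /etc/cni/net.d/
	    sudo cat /etc/cni/net.d/1-k8s.conflist
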
	I0420 01:32:32.368978  141746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:32:32.369052  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:32.369086  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-338118 minikube.k8s.io/updated_at=2024_04_20T01_32_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=no-preload-338118 minikube.k8s.io/primary=true
	I0420 01:32:32.579160  141746 ops.go:34] apiserver oom_adj: -16
	I0420 01:32:32.579218  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.079458  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.579498  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.079957  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.579520  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.079902  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.579955  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.079525  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.579612  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.079831  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.579989  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.079481  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.579798  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.080239  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.579654  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.080267  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.579837  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.079840  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.579347  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.079368  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.579641  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.079257  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.579647  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.079317  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.580002  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.698993  141746 kubeadm.go:1107] duration metric: took 12.330007154s to wait for elevateKubeSystemPrivileges
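
	(The repeated "get sa default" calls above are a readiness poll: judging by the timestamps, minikube retries roughly every 500ms until the default ServiceAccount exists before elevating kube-system privileges. A minimal shell sketch of that wait loop, reusing the binary path and kubeconfig shown in the log:)

	    # illustrative wait loop; interval inferred from the log timestamps
	    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
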
	W0420 01:32:44.699036  141746 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:32:44.699045  141746 kubeadm.go:393] duration metric: took 5m49.674421659s to StartCluster
	I0420 01:32:44.699064  141746 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.699166  141746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:32:44.700731  141746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.700982  141746 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:32:44.702752  141746 out.go:177] * Verifying Kubernetes components...
	I0420 01:32:44.701040  141746 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:32:44.701201  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:32:44.704065  141746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-338118"
	I0420 01:32:44.704078  141746 addons.go:69] Setting metrics-server=true in profile "no-preload-338118"
	I0420 01:32:44.704077  141746 addons.go:69] Setting default-storageclass=true in profile "no-preload-338118"
	I0420 01:32:44.704099  141746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-338118"
	W0420 01:32:44.704105  141746 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:32:44.704114  141746 addons.go:234] Setting addon metrics-server=true in "no-preload-338118"
	I0420 01:32:44.704113  141746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-338118"
	W0420 01:32:44.704124  141746 addons.go:243] addon metrics-server should already be in state true
	I0420 01:32:44.704151  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704157  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704069  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:32:44.704452  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704485  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704503  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704521  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704535  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704545  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.720663  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0420 01:32:44.720685  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0420 01:32:44.721210  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721222  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721746  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721766  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.721901  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721925  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.722282  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722311  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722889  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.722914  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.723194  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0420 01:32:44.723775  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.724401  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.724427  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.724790  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.724975  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.728728  141746 addons.go:234] Setting addon default-storageclass=true in "no-preload-338118"
	W0420 01:32:44.728751  141746 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:32:44.728780  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.729136  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.729161  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.738505  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0420 01:32:44.738893  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.739388  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.739409  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.739916  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.740120  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.741929  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0420 01:32:44.742090  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.744131  141746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:32:44.742538  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.745561  141746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:44.745579  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:32:44.745597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.744662  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.745640  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.745994  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.746345  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.747491  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0420 01:32:44.747878  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.748594  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.748731  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.748752  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.750445  141746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:32:44.749050  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.749380  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.749990  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.752010  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:32:44.752029  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:32:44.752046  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.752131  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.752155  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.752307  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.752479  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.752647  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.752676  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.752676  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.754727  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755188  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.755216  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755497  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.755696  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.755866  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.756034  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.768442  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0420 01:32:44.768887  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.769453  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.769473  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.769852  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.770359  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.772155  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.772443  141746 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:44.772651  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:32:44.772686  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.775775  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776177  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.776205  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776313  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.776492  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.776667  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.776832  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.930301  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:32:44.948472  141746 node_ready.go:35] waiting up to 6m0s for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960637  141746 node_ready.go:49] node "no-preload-338118" has status "Ready":"True"
	I0420 01:32:44.960664  141746 node_ready.go:38] duration metric: took 12.15407ms for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960676  141746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:44.971143  141746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980894  141746 pod_ready.go:92] pod "etcd-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.980917  141746 pod_ready.go:81] duration metric: took 9.749994ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980929  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995192  141746 pod_ready.go:92] pod "kube-apiserver-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.995217  141746 pod_ready.go:81] duration metric: took 14.279681ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995229  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004302  141746 pod_ready.go:92] pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:45.004324  141746 pod_ready.go:81] duration metric: took 9.086713ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004338  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.062482  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:45.066314  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:32:45.066334  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:32:45.093830  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:45.148558  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:32:45.148600  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:32:45.235321  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:45.235349  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:32:45.275661  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:46.686292  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592425062s)
	I0420 01:32:46.686344  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.623774979s)
	I0420 01:32:46.686360  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686375  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686385  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686401  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686822  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.686897  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.686911  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686920  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686835  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.686839  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687001  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687013  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.687027  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686850  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.687153  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687166  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687359  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687373  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.697988  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.422274698s)
	I0420 01:32:46.698045  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698059  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698320  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698339  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698351  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698359  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698568  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.698658  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698676  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698687  141746 addons.go:470] Verifying addon metrics-server=true in "no-preload-338118"
	I0420 01:32:46.733170  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.733198  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.733551  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.733573  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.733605  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.735297  141746 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0420 01:32:46.736665  141746 addons.go:505] duration metric: took 2.035625149s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
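
	(Once the addons are enabled they can be spot-checked against the cluster; a hedged example, where the deployment name and APIService name are the upstream metrics-server defaults assumed here rather than taken from this log:)

	    kubectl --context no-preload-338118 -n kube-system get deploy metrics-server
	    kubectl --context no-preload-338118 get apiservice v1beta1.metrics.k8s.io
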
	I0420 01:32:47.011271  141746 pod_ready.go:92] pod "kube-proxy-f57d9" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.011299  141746 pod_ready.go:81] duration metric: took 2.006954798s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.011309  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025378  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.025408  141746 pod_ready.go:81] duration metric: took 14.090474ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025421  141746 pod_ready.go:38] duration metric: took 2.064731781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:47.025443  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:32:47.025511  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:32:47.052680  141746 api_server.go:72] duration metric: took 2.351656586s to wait for apiserver process to appear ...
	I0420 01:32:47.052712  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:32:47.052738  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:32:47.061908  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:32:47.065615  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:32:47.065641  141746 api_server.go:131] duration metric: took 12.920384ms to wait for apiserver health ...
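
	(The healthz probe above can be reproduced by hand against the same endpoint taken from the log; a sketch only, and note that -k skips TLS verification whereas minikube validates against the cluster CA:)

	    # expect HTTP 200 with the body "ok", as logged above
	    curl -k https://192.168.72.89:8443/healthz
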
	I0420 01:32:47.065651  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:32:47.158039  141746 system_pods.go:59] 9 kube-system pods found
	I0420 01:32:47.158076  141746 system_pods.go:61] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158087  141746 system_pods.go:61] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158096  141746 system_pods.go:61] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.158101  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.158107  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.158113  141746 system_pods.go:61] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.158117  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.158126  141746 system_pods.go:61] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.158134  141746 system_pods.go:61] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.158147  141746 system_pods.go:74] duration metric: took 92.489697ms to wait for pod list to return data ...
	I0420 01:32:47.158162  141746 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:32:47.351962  141746 default_sa.go:45] found service account: "default"
	I0420 01:32:47.352002  141746 default_sa.go:55] duration metric: took 193.830142ms for default service account to be created ...
	I0420 01:32:47.352016  141746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:32:47.557471  141746 system_pods.go:86] 9 kube-system pods found
	I0420 01:32:47.557511  141746 system_pods.go:89] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557524  141746 system_pods.go:89] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557534  141746 system_pods.go:89] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.557540  141746 system_pods.go:89] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.557547  141746 system_pods.go:89] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.557554  141746 system_pods.go:89] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.557564  141746 system_pods.go:89] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.557577  141746 system_pods.go:89] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.557589  141746 system_pods.go:89] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.557602  141746 system_pods.go:126] duration metric: took 205.577946ms to wait for k8s-apps to be running ...
	I0420 01:32:47.557615  141746 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:32:47.557674  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:47.577745  141746 system_svc.go:56] duration metric: took 20.111982ms WaitForService to wait for kubelet
	I0420 01:32:47.577774  141746 kubeadm.go:576] duration metric: took 2.876759476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:32:47.577794  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:32:47.753216  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:32:47.753246  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:32:47.753257  141746 node_conditions.go:105] duration metric: took 175.457668ms to run NodePressure ...
	I0420 01:32:47.753269  141746 start.go:240] waiting for startup goroutines ...
	I0420 01:32:47.753275  141746 start.go:245] waiting for cluster config update ...
	I0420 01:32:47.753286  141746 start.go:254] writing updated cluster config ...
	I0420 01:32:47.753612  141746 ssh_runner.go:195] Run: rm -f paused
	I0420 01:32:47.804681  141746 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:32:47.806823  141746 out.go:177] * Done! kubectl is now configured to use "no-preload-338118" cluster and "default" namespace by default
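
	(At this point the kubeconfig context has been switched to the new profile; a quick way to confirm from the host, illustrative rather than part of the captured run:)

	    kubectl config current-context   # should print no-preload-338118, per the log line above
	    kubectl get nodes -o wide        # the single control-plane node should report Ready
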
	I0420 01:34:20.028550  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:34:20.028769  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0420 01:34:20.030749  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:34:20.030826  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:34:20.030947  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:34:20.031078  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:34:20.031217  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:34:20.031319  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:34:20.032927  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:34:20.033024  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:34:20.033110  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:34:20.033211  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:34:20.033286  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:34:20.033410  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:34:20.033496  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:34:20.033597  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:34:20.033695  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:34:20.033805  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:34:20.033921  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:34:20.033972  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:34:20.034042  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:34:20.034125  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:34:20.034200  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:34:20.034287  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:34:20.034355  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:34:20.034510  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:34:20.034614  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:34:20.034680  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:34:20.034760  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:34:20.036300  142411 out.go:204]   - Booting up control plane ...
	I0420 01:34:20.036380  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:34:20.036479  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:34:20.036583  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:34:20.036705  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:34:20.036888  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:34:20.036955  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:34:20.037046  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037228  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037291  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037494  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037576  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037730  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037789  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037977  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038044  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.038262  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038284  142411 kubeadm.go:309] 
	I0420 01:34:20.038341  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:34:20.038382  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:34:20.038396  142411 kubeadm.go:309] 
	I0420 01:34:20.038443  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:34:20.038476  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:34:20.038612  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:34:20.038625  142411 kubeadm.go:309] 
	I0420 01:34:20.038735  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:34:20.038767  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:34:20.038794  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:34:20.038808  142411 kubeadm.go:309] 
	I0420 01:34:20.038902  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:34:20.038977  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:34:20.038987  142411 kubeadm.go:309] 
	I0420 01:34:20.039101  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:34:20.039203  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:34:20.039274  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:34:20.039342  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:34:20.039384  142411 kubeadm.go:309] 
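
	(The troubleshooting steps kubeadm prints above apply directly to the CRI-O node used here; a consolidated sketch of running them on the affected node, e.g. via "minikube ssh -p <profile>" where <profile> is a placeholder, with the commands taken from the kubeadm output:)

	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 100
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
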
	I0420 01:34:20.039417  142411 kubeadm.go:393] duration metric: took 8m0.622979268s to StartCluster
	I0420 01:34:20.039459  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:34:20.039514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:34:20.090236  142411 cri.go:89] found id: ""
	I0420 01:34:20.090262  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.090270  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:34:20.090276  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:34:20.090331  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:34:20.133841  142411 cri.go:89] found id: ""
	I0420 01:34:20.133867  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.133875  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:34:20.133883  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:34:20.133955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:34:20.176186  142411 cri.go:89] found id: ""
	I0420 01:34:20.176219  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.176230  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:34:20.176235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:34:20.176295  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:34:20.214895  142411 cri.go:89] found id: ""
	I0420 01:34:20.214932  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.214944  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:34:20.214951  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:34:20.215018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:34:20.257759  142411 cri.go:89] found id: ""
	I0420 01:34:20.257786  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.257795  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:34:20.257800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:34:20.257857  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:34:20.298111  142411 cri.go:89] found id: ""
	I0420 01:34:20.298153  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.298164  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:34:20.298172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:34:20.298226  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:34:20.333435  142411 cri.go:89] found id: ""
	I0420 01:34:20.333469  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.333481  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:34:20.333489  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:34:20.333554  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:34:20.370848  142411 cri.go:89] found id: ""
	I0420 01:34:20.370872  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.370880  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:34:20.370890  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:34:20.370902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:34:20.425495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:34:20.425536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:34:20.442039  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:34:20.442066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:34:20.523456  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:34:20.523483  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:34:20.523504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:34:20.633387  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:34:20.633427  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0420 01:34:20.688731  142411 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0420 01:34:20.688783  142411 out.go:239] * 
	W0420 01:34:20.688839  142411 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.688862  142411 out.go:239] * 
	W0420 01:34:20.689758  142411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:34:20.693376  142411 out.go:177] 
	W0420 01:34:20.694909  142411 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.694971  142411 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0420 01:34:20.695003  142411 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0420 01:34:20.696409  142411 out.go:177] 
	
	
	==> CRI-O <==
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.520498191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713576862520464231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04c387e4-162e-40d8-b91d-3bbca4e4b28b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.522681729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a3bf0c1-62f5-4aa5-b13d-a2741fe0fb8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.522766953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a3bf0c1-62f5-4aa5-b13d-a2741fe0fb8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.522821372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6a3bf0c1-62f5-4aa5-b13d-a2741fe0fb8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.559840700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76e2b149-7eb2-4c35-995a-0740670f29af name=/runtime.v1.RuntimeService/Version
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.559995812Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76e2b149-7eb2-4c35-995a-0740670f29af name=/runtime.v1.RuntimeService/Version
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.561978240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92b32515-7534-4a03-8a2a-078804e40e47 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.562339705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713576862562319224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92b32515-7534-4a03-8a2a-078804e40e47 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.563237850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0edb5b3-ce18-4523-90cc-7b9219051247 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.563315065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0edb5b3-ce18-4523-90cc-7b9219051247 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.563350974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b0edb5b3-ce18-4523-90cc-7b9219051247 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.604024430Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d3581c5-7c99-454d-97a8-84037319a77c name=/runtime.v1.RuntimeService/Version
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.604093218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d3581c5-7c99-454d-97a8-84037319a77c name=/runtime.v1.RuntimeService/Version
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.612177183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ca319e9-17a5-45bd-8e0b-c17c6341c2f5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.612580502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713576862612557464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ca319e9-17a5-45bd-8e0b-c17c6341c2f5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.613292839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a76e4c22-1092-428c-92fc-c48e69423e70 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.613341178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a76e4c22-1092-428c-92fc-c48e69423e70 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.613374602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a76e4c22-1092-428c-92fc-c48e69423e70 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.654215358Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20bce48c-55aa-4708-9cc4-0ecf4673167a name=/runtime.v1.RuntimeService/Version
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.654316617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20bce48c-55aa-4708-9cc4-0ecf4673167a name=/runtime.v1.RuntimeService/Version
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.656214346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62302175-517b-4a76-9134-d7bad275d023 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.656670401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713576862656647662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62302175-517b-4a76-9134-d7bad275d023 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.657428432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34a3f8c6-b717-4de5-884c-b341a629cb93 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.657511681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34a3f8c6-b717-4de5-884c-b341a629cb93 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:34:22 old-k8s-version-564860 crio[649]: time="2024-04-20 01:34:22.657578074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=34a3f8c6-b717-4de5-884c-b341a629cb93 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr20 01:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057920] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044405] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.872024] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.695018] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Apr20 01:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.212298] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068714] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074791] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.229235] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.132751] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.310058] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.263827] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.070157] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.032326] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +8.834390] kauditd_printk_skb: 46 callbacks suppressed
	[Apr20 01:30] systemd-fstab-generator[5001]: Ignoring "noauto" option for root device
	[Apr20 01:32] systemd-fstab-generator[5277]: Ignoring "noauto" option for root device
	[  +0.067931] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:34:22 up 8 min,  0 users,  load average: 0.09, 0.16, 0.10
	Linux old-k8s-version-564860 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000bab6f0)
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bdfef0, 0x4f0ac20, 0xc000b8c9b0, 0x1, 0xc0001000c0)
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002547e0, 0xc0001000c0)
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b15050, 0xc000bbc620)
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 20 01:34:20 old-k8s-version-564860 kubelet[5460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 20 01:34:20 old-k8s-version-564860 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 20 01:34:20 old-k8s-version-564860 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 20 01:34:21 old-k8s-version-564860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 20 01:34:21 old-k8s-version-564860 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 20 01:34:21 old-k8s-version-564860 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 20 01:34:21 old-k8s-version-564860 kubelet[5534]: I0420 01:34:21.674941    5534 server.go:416] Version: v1.20.0
	Apr 20 01:34:21 old-k8s-version-564860 kubelet[5534]: I0420 01:34:21.675402    5534 server.go:837] Client rotation is on, will bootstrap in background
	Apr 20 01:34:21 old-k8s-version-564860 kubelet[5534]: I0420 01:34:21.677840    5534 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 20 01:34:21 old-k8s-version-564860 kubelet[5534]: W0420 01:34:21.679438    5534 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 20 01:34:21 old-k8s-version-564860 kubelet[5534]: I0420 01:34:21.679620    5534 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 2 (269.322928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-564860" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (770.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0420 01:31:12.218237   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-20 01:39:48.23806592 +0000 UTC m=+6161.752933299
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-907988 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-907988 logs -n 25: (2.179450225s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-172352 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | disable-driver-mounts-172352                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:17 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-338118             | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC | 20 Apr 24 01:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-907988  | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-269507            | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-564860        | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-338118                  | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-907988       | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:30 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-269507                 | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-564860             | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:21:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:21:33.400343  142411 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:21:33.400444  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400452  142411 out.go:304] Setting ErrFile to fd 2...
	I0420 01:21:33.400464  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400681  142411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:21:33.401213  142411 out.go:298] Setting JSON to false
	I0420 01:21:33.402151  142411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14640,"bootTime":1713561453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:21:33.402214  142411 start.go:139] virtualization: kvm guest
	I0420 01:21:33.404200  142411 out.go:177] * [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:21:33.405933  142411 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:21:33.407240  142411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:21:33.405946  142411 notify.go:220] Checking for updates...
	I0420 01:21:33.408693  142411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:21:33.409906  142411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:21:33.411155  142411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:21:33.412528  142411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:21:33.414062  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:21:33.414460  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.414524  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.428987  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0420 01:21:33.429348  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.429850  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.429873  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.430178  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.430370  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.431825  142411 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0420 01:21:33.432895  142411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:21:33.433209  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.433251  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.447157  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0420 01:21:33.447543  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.448080  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.448123  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.448444  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.448609  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.481664  142411 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:21:33.482784  142411 start.go:297] selected driver: kvm2
	I0420 01:21:33.482796  142411 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-5
64860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.482903  142411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:21:33.483572  142411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.483646  142411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:21:33.497421  142411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:21:33.497790  142411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:21:33.497854  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:21:33.497869  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:21:33.497915  142411 start.go:340] cluster config:
	{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.498027  142411 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.499624  142411 out.go:177] * Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	I0420 01:21:33.500874  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:21:33.500901  142411 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:21:33.500914  142411 cache.go:56] Caching tarball of preloaded images
	I0420 01:21:33.500992  142411 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:21:33.501007  142411 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 01:21:33.501116  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:21:33.501613  142411 start.go:360] acquireMachinesLock for old-k8s-version-564860: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:21:35.817529  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:38.889617  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:44.969590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:48.041555  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:54.121550  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:57.193604  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:03.273575  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:06.345487  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:12.425567  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:15.497538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:21.577563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:24.649534  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:30.729573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:33.801566  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:39.881590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:42.953591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:49.033641  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:52.105579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:58.185591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:01.257655  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:07.337585  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:10.409568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:16.489562  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:19.561602  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:25.641579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:28.713581  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:34.793618  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:37.865643  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:43.945593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:47.017561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:53.097597  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:56.169538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:02.249561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:05.321557  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:11.401563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:14.473539  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:20.553591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:23.625573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:29.705563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:32.777590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:38.857568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:41.929619  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:48.009565  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:51.081536  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:57.161593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:00.233633  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
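
Each failure above is a plain TCP dial to the VM's SSH port that returns "no route to host" while the guest is unreachable. A short Go probe that reproduces the same check (an illustrative sketch, not libmachine's code; the address is taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the VM's SSH port the same way the dial attempts above do.
        // While the host is down this returns an error such as
        // "dial tcp 192.168.72.89:22: connect: no route to host".
        conn, err := net.DialTimeout("tcp", "192.168.72.89:22", 10*time.Second)
        if err != nil {
            fmt.Println("ssh port not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("ssh port reachable")
    }
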
	I0420 01:25:03.237801  141927 start.go:364] duration metric: took 4m24.096402827s to acquireMachinesLock for "default-k8s-diff-port-907988"
	I0420 01:25:03.237873  141927 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:03.237883  141927 fix.go:54] fixHost starting: 
	I0420 01:25:03.238412  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:03.238453  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:03.254029  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I0420 01:25:03.254570  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:03.255071  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:25:03.255097  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:03.255474  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:03.255703  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:03.255871  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:25:03.257395  141927 fix.go:112] recreateIfNeeded on default-k8s-diff-port-907988: state=Stopped err=<nil>
	I0420 01:25:03.257430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	W0420 01:25:03.257577  141927 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:03.259083  141927 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-907988" ...
	I0420 01:25:03.260199  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Start
	I0420 01:25:03.260402  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring networks are active...
	I0420 01:25:03.261176  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network default is active
	I0420 01:25:03.261553  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network mk-default-k8s-diff-port-907988 is active
	I0420 01:25:03.262016  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Getting domain xml...
	I0420 01:25:03.262834  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Creating domain...
	I0420 01:25:03.235208  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:03.235275  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235620  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:25:03.235653  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235902  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:25:03.237636  141746 machine.go:97] duration metric: took 4m37.412949021s to provisionDockerMachine
	I0420 01:25:03.237677  141746 fix.go:56] duration metric: took 4m37.433896084s for fixHost
	I0420 01:25:03.237685  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 4m37.433927307s
	W0420 01:25:03.237715  141746 start.go:713] error starting host: provision: host is not running
	W0420 01:25:03.237980  141746 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0420 01:25:03.238076  141746 start.go:728] Will try again in 5 seconds ...
	I0420 01:25:04.453535  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting to get IP...
	I0420 01:25:04.454427  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454803  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454886  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.454785  143129 retry.go:31] will retry after 205.593849ms: waiting for machine to come up
	I0420 01:25:04.662560  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663106  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663133  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.663007  143129 retry.go:31] will retry after 246.821866ms: waiting for machine to come up
	I0420 01:25:04.911578  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912067  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912100  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.912014  143129 retry.go:31] will retry after 478.36287ms: waiting for machine to come up
	I0420 01:25:05.391624  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392063  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.391965  143129 retry.go:31] will retry after 495.387005ms: waiting for machine to come up
	I0420 01:25:05.888569  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889093  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889116  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.889009  143129 retry.go:31] will retry after 721.867239ms: waiting for machine to come up
	I0420 01:25:06.613018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613550  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613583  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:06.613495  143129 retry.go:31] will retry after 724.502229ms: waiting for machine to come up
	I0420 01:25:07.339473  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339974  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:07.339883  143129 retry.go:31] will retry after 916.936196ms: waiting for machine to come up
	I0420 01:25:08.258657  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259033  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259064  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:08.258981  143129 retry.go:31] will retry after 1.088675043s: waiting for machine to come up
	I0420 01:25:08.239597  141746 start.go:360] acquireMachinesLock for no-preload-338118: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:25:09.349021  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349421  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:09.349362  143129 retry.go:31] will retry after 1.139610002s: waiting for machine to come up
	I0420 01:25:10.490715  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491162  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491190  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:10.491119  143129 retry.go:31] will retry after 1.625829976s: waiting for machine to come up
	I0420 01:25:12.118751  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119231  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119254  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:12.119184  143129 retry.go:31] will retry after 2.904309002s: waiting for machine to come up
	I0420 01:25:15.025713  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026310  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:15.026227  143129 retry.go:31] will retry after 3.471792967s: waiting for machine to come up
	I0420 01:25:18.500247  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500679  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:18.500595  143129 retry.go:31] will retry after 4.499766051s: waiting for machine to come up
	I0420 01:25:23.005446  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.005935  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Found IP for machine: 192.168.39.222
	I0420 01:25:23.005956  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserving static IP address...
	I0420 01:25:23.005970  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has current primary IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.006453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.006479  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserved static IP address: 192.168.39.222
	I0420 01:25:23.006513  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | skip adding static IP to network mk-default-k8s-diff-port-907988 - found existing host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"}
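
The "waiting for machine to come up" entries above poll libvirt for the domain's DHCP lease, sleeping a growing interval after each miss until an IP for the VM's MAC address appears. A rough Go sketch of that poll-with-backoff loop (illustrative only; the real logic lives in minikube's retry and kvm2 driver code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it yields an address or attempts run out,
    // sleeping a little longer after each failure, like the
    // "will retry after ...: waiting for machine to come up" lines above.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        delay := 200 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay)
            delay = delay * 3 / 2 // back off gradually between polls
        }
        return "", errors.New("machine did not report an IP in time")
    }

    func main() {
        // Toy lookup that "finds" an IP on the fourth attempt.
        n := 0
        lookup := func() (string, error) {
            n++
            if n < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.222", nil
        }
        ip, err := waitForIP(lookup, 10)
        fmt.Println(ip, err)
    }
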
	I0420 01:25:23.006537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for SSH to be available...
	I0420 01:25:23.006544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Getting to WaitForSSH function...
	I0420 01:25:23.009090  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009505  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.009537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009658  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH client type: external
	I0420 01:25:23.009695  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa (-rw-------)
	I0420 01:25:23.009732  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:23.009748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | About to run SSH command:
	I0420 01:25:23.009766  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | exit 0
	I0420 01:25:23.133489  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | SSH cmd err, output: <nil>: 
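
The DBG lines above print the exact arguments handed to the external ssh binary while waiting for the guest to accept connections. The same invocation expressed from Go by shelling out to /usr/bin/ssh (a sketch; the key path, address, and options are copied from the log lines):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa"
        // "exit 0" is the probe command from the WaitForSSH step above.
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "-p", "22",
            "docker@192.168.39.222",
            "exit 0")
        out, err := cmd.CombinedOutput()
        fmt.Printf("err=%v output=%q\n", err, out)
    }
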
	I0420 01:25:23.133940  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetConfigRaw
	I0420 01:25:23.134589  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.137340  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.137685  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.137708  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.138000  141927 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/config.json ...
	I0420 01:25:23.138228  141927 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:23.138253  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:23.138461  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.140536  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.140815  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.140841  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.141024  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.141244  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141450  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141595  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.141777  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.142053  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.142067  141927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:23.249946  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:23.249979  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250250  141927 buildroot.go:166] provisioning hostname "default-k8s-diff-port-907988"
	I0420 01:25:23.250280  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250483  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.253030  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.253456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253564  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.253755  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.253978  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.254135  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.254334  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.254504  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.254517  141927 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-907988 && echo "default-k8s-diff-port-907988" | sudo tee /etc/hostname
	I0420 01:25:23.379061  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-907988
	
	I0420 01:25:23.379092  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.381893  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382249  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.382278  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382465  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.382666  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382831  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382939  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.383118  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.383324  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.383349  141927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-907988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-907988/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-907988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:23.499869  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
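
The hostname and /etc/hosts commands above run over minikube's "native" SSH client, that is an in-process SSH session rather than the external ssh binary used during WaitForSSH. A comparable native client using golang.org/x/crypto/ssh, assuming the key path and address shown in the log (a sketch, not the libmachine implementation):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs use throwaway host keys
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.39.222:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.Output("hostname") // the first command shown in the log above
        fmt.Printf("hostname=%q err=%v\n", out, err)
    }
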
	I0420 01:25:23.499903  141927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:23.499932  141927 buildroot.go:174] setting up certificates
	I0420 01:25:23.499941  141927 provision.go:84] configureAuth start
	I0420 01:25:23.499950  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.500178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.502735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503050  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.503085  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503201  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.505586  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.505924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.505968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.506036  141927 provision.go:143] copyHostCerts
	I0420 01:25:23.506136  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:23.506150  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:23.506233  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:23.506386  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:23.506396  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:23.506444  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:23.506525  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:23.506536  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:23.506569  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:23.506640  141927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-907988 san=[127.0.0.1 192.168.39.222 default-k8s-diff-port-907988 localhost minikube]
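
The provision step above issues a server certificate signed by the minikube CA with the SAN list printed in the log (127.0.0.1, 192.168.39.222, the machine name, localhost, minikube). A compact Go sketch of producing a certificate with those SANs; it self-signs for brevity, whereas the real step signs with the CA key from certs/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-907988"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as listed in the provision.go line above.
            DNSNames:    []string{"default-k8s-diff-port-907988", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
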
	I0420 01:25:23.598855  141927 provision.go:177] copyRemoteCerts
	I0420 01:25:23.598930  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:23.598967  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.602183  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.602544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.602903  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.603143  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.603301  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:23.688294  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:23.714719  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0420 01:25:23.744530  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:25:23.774733  141927 provision.go:87] duration metric: took 274.778779ms to configureAuth
	I0420 01:25:23.774756  141927 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:23.774990  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:23.775083  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.777817  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.778213  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778376  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.778596  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778763  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778984  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.779167  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.779364  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.779393  141927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:24.314463  142057 start.go:364] duration metric: took 4m32.915907541s to acquireMachinesLock for "embed-certs-269507"
	I0420 01:25:24.314618  142057 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:24.314645  142057 fix.go:54] fixHost starting: 
	I0420 01:25:24.315169  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:24.315220  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:24.331820  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0420 01:25:24.332243  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:24.332707  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:25:24.332730  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:24.333157  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:24.333371  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:24.333551  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:25:24.335004  142057 fix.go:112] recreateIfNeeded on embed-certs-269507: state=Stopped err=<nil>
	I0420 01:25:24.335044  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	W0420 01:25:24.335211  142057 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:24.337246  142057 out.go:177] * Restarting existing kvm2 VM for "embed-certs-269507" ...
	I0420 01:25:24.056795  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:24.056832  141927 machine.go:97] duration metric: took 918.585863ms to provisionDockerMachine
	I0420 01:25:24.056849  141927 start.go:293] postStartSetup for "default-k8s-diff-port-907988" (driver="kvm2")
	I0420 01:25:24.056865  141927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:24.056889  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.057250  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:24.057281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.060602  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.060992  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.061028  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.061196  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.061422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.061631  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.061785  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.152109  141927 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:24.157292  141927 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:24.157330  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:24.157397  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:24.157490  141927 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:24.157606  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:24.171039  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:24.201343  141927 start.go:296] duration metric: took 144.476748ms for postStartSetup
	I0420 01:25:24.201383  141927 fix.go:56] duration metric: took 20.963499628s for fixHost
	I0420 01:25:24.201409  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.204283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204648  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.204681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204842  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.205022  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205204  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205411  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.205732  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:24.206255  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:24.206269  141927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:24.314311  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576324.296261493
	
	I0420 01:25:24.314336  141927 fix.go:216] guest clock: 1713576324.296261493
	I0420 01:25:24.314346  141927 fix.go:229] Guest: 2024-04-20 01:25:24.296261493 +0000 UTC Remote: 2024-04-20 01:25:24.201388226 +0000 UTC m=+285.207728057 (delta=94.873267ms)
	I0420 01:25:24.314373  141927 fix.go:200] guest clock delta is within tolerance: 94.873267ms
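
The fix.go lines above read the guest clock over SSH (the logged command is evidently "date +%s.%N"; the %!s(MISSING) fragments are Go fmt placeholder noise in the log, and the seconds.nanoseconds output confirms it) and compare it against the host-side timestamp. The arithmetic, spelled out in Go with the values from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock, reconstructed from the "date +%s.%N" output above.
        guest := time.Unix(1713576324, 296261493).UTC()
        // Host-side reference time from the same fix.go line.
        remote := time.Date(2024, 4, 20, 1, 25, 24, 201388226, time.UTC)
        delta := guest.Sub(remote)
        fmt.Println(delta) // ≈ 94.873267ms, inside the allowed drift
    }
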
	I0420 01:25:24.314380  141927 start.go:83] releasing machines lock for "default-k8s-diff-port-907988", held for 21.076529311s
	I0420 01:25:24.314420  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.314699  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:24.317281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.317731  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317858  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318364  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318557  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318664  141927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:24.318723  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.318833  141927 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:24.318862  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.321519  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321572  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321937  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.321968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321994  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.322011  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.322121  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322233  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322323  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322502  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322730  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.322871  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.403742  141927 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:24.429207  141927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:24.590621  141927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:24.597818  141927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:24.597890  141927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:24.617031  141927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:24.617050  141927 start.go:494] detecting cgroup driver to use...
	I0420 01:25:24.617126  141927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:24.643134  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:24.658222  141927 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:24.658275  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:24.672409  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:24.686722  141927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:24.810871  141927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:24.965702  141927 docker.go:233] disabling docker service ...
	I0420 01:25:24.965765  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:24.984504  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:24.999580  141927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:25.151023  141927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:25.278443  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:25.295439  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:25.316425  141927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:25.316494  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.329052  141927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:25.329119  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.342102  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.354831  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.368084  141927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:25.380515  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.392952  141927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.411707  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
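
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed here from the commands, not captured from the VM):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The earlier tee also writes /etc/crictl.yaml so that crictl talks to the same socket, runtime-endpoint: unix:///var/run/crio/crio.sock.
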
	I0420 01:25:25.423776  141927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:25.434175  141927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:25.434234  141927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:25.449180  141927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:25.460018  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:25.579669  141927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:25.741777  141927 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:25.741854  141927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:25.747422  141927 start.go:562] Will wait 60s for crictl version
	I0420 01:25:25.747478  141927 ssh_runner.go:195] Run: which crictl
	I0420 01:25:25.752164  141927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:25.800400  141927 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:25.800491  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.832099  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.865692  141927 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:25:24.338547  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Start
	I0420 01:25:24.338743  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring networks are active...
	I0420 01:25:24.339527  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network default is active
	I0420 01:25:24.340064  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network mk-embed-certs-269507 is active
	I0420 01:25:24.340520  142057 main.go:141] libmachine: (embed-certs-269507) Getting domain xml...
	I0420 01:25:24.341363  142057 main.go:141] libmachine: (embed-certs-269507) Creating domain...
	I0420 01:25:25.566725  142057 main.go:141] libmachine: (embed-certs-269507) Waiting to get IP...
	I0420 01:25:25.567704  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.568195  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.568263  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.568160  143271 retry.go:31] will retry after 229.672507ms: waiting for machine to come up
	I0420 01:25:25.799515  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.799964  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.799994  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.799916  143271 retry.go:31] will retry after 352.048372ms: waiting for machine to come up
	I0420 01:25:26.153710  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.154217  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.154245  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.154159  143271 retry.go:31] will retry after 451.404487ms: waiting for machine to come up
	I0420 01:25:25.867283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:25.870225  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.870725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:25.870748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.871001  141927 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:25.875986  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:25.890923  141927 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907
988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:25.891043  141927 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:25.891088  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:25.934665  141927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:25.934743  141927 ssh_runner.go:195] Run: which lz4
	I0420 01:25:25.939157  141927 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:25:25.943759  141927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:25.943788  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:27.674416  141927 crio.go:462] duration metric: took 1.735279369s to copy over tarball
	I0420 01:25:27.674484  141927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:25:26.607751  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.608327  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.608362  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.608273  143271 retry.go:31] will retry after 548.149542ms: waiting for machine to come up
	I0420 01:25:27.157746  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.158193  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.158220  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.158158  143271 retry.go:31] will retry after 543.066807ms: waiting for machine to come up
	I0420 01:25:27.702417  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.702812  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.702842  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.702778  143271 retry.go:31] will retry after 801.842999ms: waiting for machine to come up
	I0420 01:25:28.505673  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:28.506233  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:28.506264  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:28.506169  143271 retry.go:31] will retry after 1.176665861s: waiting for machine to come up
	I0420 01:25:29.684134  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:29.684642  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:29.684676  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:29.684582  143271 retry.go:31] will retry after 1.09397916s: waiting for machine to come up
	I0420 01:25:30.780467  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:30.780962  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:30.780987  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:30.780924  143271 retry.go:31] will retry after 1.560706704s: waiting for machine to come up
	I0420 01:25:30.280138  141927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.605620888s)
	I0420 01:25:30.280235  141927 crio.go:469] duration metric: took 2.605784372s to extract the tarball
	I0420 01:25:30.280269  141927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:30.323590  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:30.384053  141927 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:30.384083  141927 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:30.384094  141927 kubeadm.go:928] updating node { 192.168.39.222 8444 v1.30.0 crio true true} ...
	I0420 01:25:30.384258  141927 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-907988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:30.384347  141927 ssh_runner.go:195] Run: crio config
	I0420 01:25:30.431033  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:30.431059  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:30.431074  141927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:30.431094  141927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-907988 NodeName:default-k8s-diff-port-907988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:30.431267  141927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-907988"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:30.431327  141927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:30.444735  141927 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:30.444807  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:30.457543  141927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0420 01:25:30.477858  141927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:30.497632  141927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0420 01:25:30.518062  141927 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:30.522820  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:30.538677  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:30.686290  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:30.721316  141927 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988 for IP: 192.168.39.222
	I0420 01:25:30.721342  141927 certs.go:194] generating shared ca certs ...
	I0420 01:25:30.721373  141927 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:30.721607  141927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:30.721664  141927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:30.721679  141927 certs.go:256] generating profile certs ...
	I0420 01:25:30.721789  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/client.key
	I0420 01:25:30.721873  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key.b8de10ae
	I0420 01:25:30.721912  141927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key
	I0420 01:25:30.722019  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:30.722052  141927 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:30.722067  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:30.722094  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:30.722122  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:30.722144  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:30.722189  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:30.723048  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:30.762666  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:30.800218  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:30.849282  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:30.893355  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0420 01:25:30.924642  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:30.956734  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:30.986491  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:31.015876  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:31.043860  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:31.073822  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:31.100731  141927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:31.119908  141927 ssh_runner.go:195] Run: openssl version
	I0420 01:25:31.128209  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:31.140164  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145371  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145432  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.151726  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:31.163371  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:31.175115  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180237  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180286  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.186548  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:31.198703  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:31.211529  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217258  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217326  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.223822  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:25:31.236363  141927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:31.241793  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:31.250826  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:31.259850  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:31.267387  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:31.274477  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:31.281452  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
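(Editor's note: the series of `openssl x509 ... -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least another 86400 seconds, i.e. 24 hours; the command exits 0 if the certificate will still be valid then and 1 if it would expire. A hedged manual equivalent, with the echo added purely for illustration:)

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid in 24h" || echo "expires within 24h (exit status 1)"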
	I0420 01:25:31.287980  141927 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:31.288094  141927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:31.288159  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.344552  141927 cri.go:89] found id: ""
	I0420 01:25:31.344646  141927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:31.357049  141927 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:31.357075  141927 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:31.357081  141927 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:31.357147  141927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:31.368636  141927 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:31.370055  141927 kubeconfig.go:125] found "default-k8s-diff-port-907988" server: "https://192.168.39.222:8444"
	I0420 01:25:31.373063  141927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:31.384821  141927 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.222
	I0420 01:25:31.384861  141927 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:31.384876  141927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:31.384946  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.432801  141927 cri.go:89] found id: ""
	I0420 01:25:31.432902  141927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:31.458842  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:31.472706  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:31.472728  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:31.472780  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:25:31.486221  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:31.486276  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:31.500036  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:25:31.510180  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:31.510237  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:31.520560  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.530333  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:31.530387  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.541053  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:25:31.551200  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:31.551257  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:31.561364  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:31.572967  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:31.690537  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.319980  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.546554  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.631937  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.729738  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:32.729838  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.230769  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.730452  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.807772  141927 api_server.go:72] duration metric: took 1.07803345s to wait for apiserver process to appear ...
	I0420 01:25:33.807805  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:33.807829  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:33.808551  141927 api_server.go:269] stopped: https://192.168.39.222:8444/healthz: Get "https://192.168.39.222:8444/healthz": dial tcp 192.168.39.222:8444: connect: connection refused
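(Editor's note: the healthz polling that follows first hits "connection refused" while the apiserver container starts, then HTTP 403 for the anonymous user, then verbose HTTP 500 output until post-start hooks such as rbac/bootstrap-roles finish. The same endpoint can be probed by hand; this is a sketch, assuming the host can reach the node IP directly. The -k flag skips TLS verification since the minikube CA is not in the local trust store, and the verbose query parameter asks the apiserver to list individual checks; an unauthenticated request may get the 403 shown further below.)

    curl -k "https://192.168.39.222:8444/healthz?verbose"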
	I0420 01:25:32.342951  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:32.343373  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:32.343420  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:32.343352  143271 retry.go:31] will retry after 1.871100952s: waiting for machine to come up
	I0420 01:25:34.215884  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:34.216313  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:34.216341  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:34.216253  143271 retry.go:31] will retry after 2.017753728s: waiting for machine to come up
	I0420 01:25:36.237296  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:36.237906  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:36.237936  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:36.237856  143271 retry.go:31] will retry after 3.431912056s: waiting for machine to come up
	I0420 01:25:34.308465  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.098889  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.098928  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.098945  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.149496  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.149534  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.308936  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.313975  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.314005  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:37.808680  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.818747  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.818784  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.307905  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.318528  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.318563  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.808127  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.816135  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.816167  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.307985  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.313712  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.313753  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.808225  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.812825  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.812858  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.308366  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.312930  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:40.312970  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.808320  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.812979  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:25:40.820265  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:25:40.820289  141927 api_server.go:131] duration metric: took 7.012476869s to wait for apiserver health ...
	I0420 01:25:40.820298  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:40.820304  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:40.822367  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:25:39.671070  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:39.671556  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:39.671614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:39.671502  143271 retry.go:31] will retry after 3.954438708s: waiting for machine to come up
	I0420 01:25:40.823843  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:25:40.837960  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:25:40.858294  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:25:40.867542  141927 system_pods.go:59] 8 kube-system pods found
	I0420 01:25:40.867577  141927 system_pods.go:61] "coredns-7db6d8ff4d-7v886" [0e0b3a5f-041a-4bbc-94aa-c9571a8761ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:25:40.867584  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [88f687c4-8865-4fe6-92f1-448cfde6117c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:25:40.867590  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [2c9f0d90-35c6-45ad-b9b1-9504c55a1e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:25:40.867597  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [949ce449-06b4-4650-8ba0-7567637d6aec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:25:40.867604  141927 system_pods.go:61] "kube-proxy-dg6xn" [1124d9e8-41aa-44a9-8a4a-eafd2cd6c6c9] Running
	I0420 01:25:40.867626  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [df93de11-c23d-4f5d-afd4-1af7928933fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:25:40.867640  141927 system_pods.go:61] "metrics-server-569cc877fc-rqqlt" [2c7d91c3-fce8-4603-a7be-8d9b415d71f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:25:40.867647  141927 system_pods.go:61] "storage-provisioner" [af4dc99d-feef-4c24-852a-4c8cad22dd7d] Running
	I0420 01:25:40.867654  141927 system_pods.go:74] duration metric: took 9.33485ms to wait for pod list to return data ...
	I0420 01:25:40.867670  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:25:40.871045  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:25:40.871067  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:25:40.871078  141927 node_conditions.go:105] duration metric: took 3.402743ms to run NodePressure ...
	I0420 01:25:40.871094  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:41.142438  141927 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151801  141927 kubeadm.go:733] kubelet initialised
	I0420 01:25:41.151822  141927 kubeadm.go:734] duration metric: took 9.359538ms waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151830  141927 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:25:41.160583  141927 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.169184  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169214  141927 pod_ready.go:81] duration metric: took 8.596607ms for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.169226  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169234  141927 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.175518  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175544  141927 pod_ready.go:81] duration metric: took 6.298273ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.175558  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175567  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.189038  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189062  141927 pod_ready.go:81] duration metric: took 13.484198ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.189072  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189078  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.261162  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261191  141927 pod_ready.go:81] duration metric: took 72.106763ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.261203  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261210  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662532  141927 pod_ready.go:92] pod "kube-proxy-dg6xn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:41.662553  141927 pod_ready.go:81] duration metric: took 401.337101ms for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662562  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:43.670281  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:45.122924  142411 start.go:364] duration metric: took 4m11.621269498s to acquireMachinesLock for "old-k8s-version-564860"
	I0420 01:25:45.122996  142411 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:45.123018  142411 fix.go:54] fixHost starting: 
	I0420 01:25:45.123538  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:45.123581  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:45.141340  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0420 01:25:45.141873  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:45.142555  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:25:45.142592  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:45.142979  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:45.143234  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:25:45.143426  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetState
	I0420 01:25:45.145067  142411 fix.go:112] recreateIfNeeded on old-k8s-version-564860: state=Stopped err=<nil>
	I0420 01:25:45.145114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	W0420 01:25:45.145289  142411 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:45.147498  142411 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-564860" ...
	I0420 01:25:43.630616  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631126  142057 main.go:141] libmachine: (embed-certs-269507) Found IP for machine: 192.168.50.184
	I0420 01:25:43.631159  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has current primary IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631173  142057 main.go:141] libmachine: (embed-certs-269507) Reserving static IP address...
	I0420 01:25:43.631625  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.631677  142057 main.go:141] libmachine: (embed-certs-269507) DBG | skip adding static IP to network mk-embed-certs-269507 - found existing host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"}
	I0420 01:25:43.631692  142057 main.go:141] libmachine: (embed-certs-269507) Reserved static IP address: 192.168.50.184
	I0420 01:25:43.631710  142057 main.go:141] libmachine: (embed-certs-269507) Waiting for SSH to be available...
	I0420 01:25:43.631731  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Getting to WaitForSSH function...
	I0420 01:25:43.634292  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.634650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634833  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH client type: external
	I0420 01:25:43.634883  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa (-rw-------)
	I0420 01:25:43.634916  142057 main.go:141] libmachine: (embed-certs-269507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:43.634935  142057 main.go:141] libmachine: (embed-certs-269507) DBG | About to run SSH command:
	I0420 01:25:43.634949  142057 main.go:141] libmachine: (embed-certs-269507) DBG | exit 0
	I0420 01:25:43.757712  142057 main.go:141] libmachine: (embed-certs-269507) DBG | SSH cmd err, output: <nil>: 
	I0420 01:25:43.758118  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetConfigRaw
	I0420 01:25:43.758820  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:43.761626  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762007  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.762083  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762328  142057 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/config.json ...
	I0420 01:25:43.762556  142057 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:43.762575  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:43.762827  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.765841  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.766304  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766461  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.766636  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766766  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766884  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.767111  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.767371  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.767386  142057 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:43.874709  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:43.874741  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875018  142057 buildroot.go:166] provisioning hostname "embed-certs-269507"
	I0420 01:25:43.875052  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875265  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.878226  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878645  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.878675  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878767  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.878976  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879120  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879246  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.879375  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.879585  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.879613  142057 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-269507 && echo "embed-certs-269507" | sudo tee /etc/hostname
	I0420 01:25:44.003458  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-269507
	
	I0420 01:25:44.003502  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.006277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006706  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.006745  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006922  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.007227  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007417  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007604  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.007772  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.007959  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.007979  142057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-269507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-269507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-269507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:44.124457  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:44.124494  142057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:44.124516  142057 buildroot.go:174] setting up certificates
	I0420 01:25:44.124526  142057 provision.go:84] configureAuth start
	I0420 01:25:44.124537  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:44.124850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:44.127589  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.127958  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.127980  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.128196  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.130485  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130792  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.130830  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130992  142057 provision.go:143] copyHostCerts
	I0420 01:25:44.131060  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:44.131075  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:44.131132  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:44.131237  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:44.131246  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:44.131266  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:44.131326  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:44.131333  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:44.131349  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:44.131397  142057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.embed-certs-269507 san=[127.0.0.1 192.168.50.184 embed-certs-269507 localhost minikube]
	I0420 01:25:44.404404  142057 provision.go:177] copyRemoteCerts
	I0420 01:25:44.404469  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:44.404498  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.407318  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.407683  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.408033  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.408182  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.408307  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.498069  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:44.524979  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0420 01:25:44.553537  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:25:44.580307  142057 provision.go:87] duration metric: took 455.767679ms to configureAuth
	I0420 01:25:44.580332  142057 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:44.580609  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:44.580722  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.583352  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583728  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.583761  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583978  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.584205  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584383  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584516  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.584715  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.584905  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.584926  142057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:44.882565  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:44.882599  142057 machine.go:97] duration metric: took 1.120028956s to provisionDockerMachine
	I0420 01:25:44.882612  142057 start.go:293] postStartSetup for "embed-certs-269507" (driver="kvm2")
	I0420 01:25:44.882622  142057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:44.882639  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:44.882971  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:44.883012  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.885829  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886181  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.886208  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886372  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.886598  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.886761  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.886915  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.972428  142057 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:44.977228  142057 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:44.977257  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:44.977344  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:44.977435  142057 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:44.977552  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:44.987372  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:45.014435  142057 start.go:296] duration metric: took 131.807177ms for postStartSetup
	I0420 01:25:45.014484  142057 fix.go:56] duration metric: took 20.699839101s for fixHost
	I0420 01:25:45.014512  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.017361  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017768  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.017795  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017943  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.018150  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018302  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018421  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.018643  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:45.018815  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:45.018827  142057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:45.122766  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576345.101529100
	
	I0420 01:25:45.122788  142057 fix.go:216] guest clock: 1713576345.101529100
	I0420 01:25:45.122796  142057 fix.go:229] Guest: 2024-04-20 01:25:45.1015291 +0000 UTC Remote: 2024-04-20 01:25:45.014489313 +0000 UTC m=+293.764572165 (delta=87.039787ms)
	I0420 01:25:45.122823  142057 fix.go:200] guest clock delta is within tolerance: 87.039787ms
	I0420 01:25:45.122828  142057 start.go:83] releasing machines lock for "embed-certs-269507", held for 20.808247089s
	I0420 01:25:45.122851  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.123156  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:45.125956  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126377  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.126408  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126536  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127059  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127264  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127349  142057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:45.127404  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.127470  142057 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:45.127497  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.130071  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130393  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130427  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130447  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130727  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.130825  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130854  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130932  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131041  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.131115  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131220  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.131301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131451  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131597  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.211824  142057 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:45.236425  142057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:45.383069  142057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:45.391072  142057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:45.391159  142057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:45.410287  142057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:45.410313  142057 start.go:494] detecting cgroup driver to use...
	I0420 01:25:45.410395  142057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:45.433663  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:45.452933  142057 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:45.452999  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:45.473208  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:45.493261  142057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:45.650111  142057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:45.847482  142057 docker.go:233] disabling docker service ...
	I0420 01:25:45.847559  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:45.871032  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:45.892747  142057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:46.076222  142057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:46.218078  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:46.236006  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:46.259279  142057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:46.259363  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.272573  142057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:46.272647  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.286468  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.298708  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.313197  142057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:46.332844  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.345531  142057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.367686  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.379702  142057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:46.390491  142057 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:46.390558  142057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:46.406027  142057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:46.417370  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:46.543690  142057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:46.725507  142057 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:46.725599  142057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:46.734173  142057 start.go:562] Will wait 60s for crictl version
	I0420 01:25:46.734246  142057 ssh_runner.go:195] Run: which crictl
	I0420 01:25:46.740381  142057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:46.801341  142057 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:46.801431  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.843121  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.889958  142057 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:25:45.148885  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .Start
	I0420 01:25:45.149115  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring networks are active...
	I0420 01:25:45.149856  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network default is active
	I0420 01:25:45.150205  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network mk-old-k8s-version-564860 is active
	I0420 01:25:45.150615  142411 main.go:141] libmachine: (old-k8s-version-564860) Getting domain xml...
	I0420 01:25:45.151296  142411 main.go:141] libmachine: (old-k8s-version-564860) Creating domain...
	I0420 01:25:46.465532  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting to get IP...
	I0420 01:25:46.466816  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.467306  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.467383  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.467288  143434 retry.go:31] will retry after 265.980653ms: waiting for machine to come up
	I0420 01:25:46.735144  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.735676  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.735700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.735627  143434 retry.go:31] will retry after 254.534112ms: waiting for machine to come up
	I0420 01:25:46.992222  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.992707  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.992738  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.992621  143434 retry.go:31] will retry after 434.179962ms: waiting for machine to come up
	I0420 01:25:47.428397  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.428949  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.428987  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.428899  143434 retry.go:31] will retry after 533.143168ms: waiting for machine to come up
	I0420 01:25:47.963467  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.964008  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.964035  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.963957  143434 retry.go:31] will retry after 601.536298ms: waiting for machine to come up
	I0420 01:25:45.675159  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:48.175457  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:48.175487  141927 pod_ready.go:81] duration metric: took 6.512916578s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:48.175499  141927 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:46.891233  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:46.894647  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895107  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:46.895170  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895398  142057 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:46.900604  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:46.920025  142057 kubeadm.go:877] updating cluster {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:46.920184  142057 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:46.920247  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:46.967086  142057 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:46.967171  142057 ssh_runner.go:195] Run: which lz4
	I0420 01:25:46.973391  142057 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:25:46.979210  142057 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:46.979241  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:48.806615  142057 crio.go:462] duration metric: took 1.83326325s to copy over tarball
	I0420 01:25:48.806701  142057 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
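Since no preloaded tarball exists on the guest, the ~394 MB cached tarball is copied over and unpacked with tar + lz4 into /var. A hedged Go sketch of that extract step follows; extractPreload is an illustrative helper, not minikube's real function.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the command from the log:
// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	_ = extractPreload("/preloaded.tar.lz4", "/var")
}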
	I0420 01:25:48.567922  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:48.568436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:48.568469  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:48.568387  143434 retry.go:31] will retry after 853.809635ms: waiting for machine to come up
	I0420 01:25:49.423590  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:49.424154  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:49.424178  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:49.424099  143434 retry.go:31] will retry after 1.096859163s: waiting for machine to come up
	I0420 01:25:50.522906  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:50.523406  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:50.523436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:50.523350  143434 retry.go:31] will retry after 983.057252ms: waiting for machine to come up
	I0420 01:25:51.508033  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:51.508557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:51.508596  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:51.508497  143434 retry.go:31] will retry after 1.463876638s: waiting for machine to come up
	I0420 01:25:52.974032  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:52.974508  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:52.974536  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:52.974459  143434 retry.go:31] will retry after 1.859889372s: waiting for machine to come up
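The old-k8s-version VM has no DHCP lease yet, so the driver keeps polling for an IP with growing delays (853ms, 1.09s, 983ms, 1.46s, 1.86s, ...). A minimal retry-with-growing-delay sketch is shown below; the jitter and growth factor are assumptions, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Grow the delay and add a little jitter, similar to the increasing
		// waits visible in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	_, _ = waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3)
}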
	I0420 01:25:50.183489  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:53.262055  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:51.389972  142057 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.583237436s)
	I0420 01:25:51.390002  142057 crio.go:469] duration metric: took 2.583356337s to extract the tarball
	I0420 01:25:51.390010  142057 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:51.434741  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:51.489945  142057 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:51.489974  142057 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:51.489984  142057 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0420 01:25:51.490126  142057 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-269507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:51.490226  142057 ssh_runner.go:195] Run: crio config
	I0420 01:25:51.548273  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:25:51.548299  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:51.548316  142057 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:51.548356  142057 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-269507 NodeName:embed-certs-269507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:51.548534  142057 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-269507"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:51.548614  142057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:51.560359  142057 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:51.560428  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:51.571609  142057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0420 01:25:51.594462  142057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:51.621417  142057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
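The multi-document kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered with node-specific values and written to /var/tmp/minikube/kubeadm.yaml.new. A hedged text/template sketch of rendering just the node-specific InitConfiguration part is below; the field and template names are illustrative only.

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	params := struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{"192.168.50.184", 8443, "embed-certs-269507"}
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = tmpl.Execute(os.Stdout, params) // in the log this ends up in /var/tmp/minikube/kubeadm.yaml.new
}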
	I0420 01:25:51.649250  142057 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:51.655304  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:51.675476  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:51.809652  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:51.829341  142057 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507 for IP: 192.168.50.184
	I0420 01:25:51.829405  142057 certs.go:194] generating shared ca certs ...
	I0420 01:25:51.829430  142057 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:51.829627  142057 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:51.829687  142057 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:51.829697  142057 certs.go:256] generating profile certs ...
	I0420 01:25:51.829823  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/client.key
	I0420 01:25:52.088423  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key.c1e63643
	I0420 01:25:52.088542  142057 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key
	I0420 01:25:52.088748  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:52.088811  142057 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:52.088841  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:52.088880  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:52.088919  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:52.088959  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:52.089020  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:52.090046  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:52.130739  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:52.163426  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:52.202470  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:52.232070  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0420 01:25:52.265640  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:52.305670  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:52.336788  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:52.371507  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:52.403015  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:52.433761  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:52.461373  142057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:52.480675  142057 ssh_runner.go:195] Run: openssl version
	I0420 01:25:52.486965  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:52.499466  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506355  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506409  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.514625  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:52.530107  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:52.544051  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549426  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549495  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.555960  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:25:52.569332  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:52.583057  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588323  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588390  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.594622  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
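Each CA certificate copied to /usr/share/ca-certificates is hashed with openssl x509 -hash and symlinked into /etc/ssl/certs under <hash>.0 so OpenSSL-based clients can find it. A hedged Go sketch of that hash-and-symlink step follows; installCA and the paths are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent to: test -L <link> || ln -fs <pemPath> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(pemPath, link)
}

func main() {
	_ = installCA("/usr/share/ca-certificates/minikubeCA.pem")
}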
	I0420 01:25:52.607021  142057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:52.612270  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:52.619182  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:52.626168  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:52.633276  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:52.639840  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:52.646478  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
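The run of openssl x509 -checkend 86400 commands above verifies that none of the control-plane certificates expire within the next 24 hours. The same check can be done natively with crypto/x509, as in the hedged sketch below; the file path is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at pemPath expires within d,
// matching the semantics of `openssl x509 -checkend <seconds>`.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}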
	I0420 01:25:52.652982  142057 kubeadm.go:391] StartCluster: {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:52.653130  142057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:52.653182  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.699113  142057 cri.go:89] found id: ""
	I0420 01:25:52.699200  142057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:52.712835  142057 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:52.712859  142057 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:52.712867  142057 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:52.712914  142057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:52.726130  142057 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:52.727354  142057 kubeconfig.go:125] found "embed-certs-269507" server: "https://192.168.50.184:8443"
	I0420 01:25:52.729600  142057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:52.744185  142057 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0420 01:25:52.744217  142057 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:52.744231  142057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:52.744292  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.792889  142057 cri.go:89] found id: ""
	I0420 01:25:52.792967  142057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:52.812771  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:52.824478  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:52.824495  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:52.824533  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:25:52.835612  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:52.835679  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:52.847089  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:25:52.858049  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:52.858126  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:52.872787  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.886588  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:52.886649  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.899467  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:25:52.910884  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:52.910942  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
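The sequence above is the stale-kubeconfig cleanup: for each component config under /etc/kubernetes, the endpoint https://control-plane.minikube.internal:8443 is grepped for, and any file that is missing or does not reference it is removed so kubeadm can regenerate it. A hedged Go sketch of that loop is below; cleanupStaleConfigs is an illustrative name.

package main

import (
	"bytes"
	"fmt"
	"os"
)

func cleanupStaleConfigs(endpoint string, files []string) {
	needle := []byte("https://" + endpoint)
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, needle) {
			fmt.Printf("%q may not be in %s - will remove\n", "https://"+endpoint, f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanupStaleConfigs("control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}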
	I0420 01:25:52.922217  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:52.933432  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:53.108167  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.044709  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.257949  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.327450  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.426738  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:54.426849  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:54.926955  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.427198  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.489075  142057 api_server.go:72] duration metric: took 1.06233038s to wait for apiserver process to appear ...
	I0420 01:25:55.489109  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:55.489137  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:55.489682  142057 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0420 01:25:55.989278  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:54.836137  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:54.836639  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:54.836670  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:54.836584  143434 retry.go:31] will retry after 2.172259495s: waiting for machine to come up
	I0420 01:25:57.011412  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:57.011810  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:57.011840  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:57.011782  143434 retry.go:31] will retry after 2.279304552s: waiting for machine to come up
	I0420 01:25:55.684205  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:57.686312  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:58.334562  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.334594  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.334614  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.344779  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.344814  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.490111  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.499158  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.499194  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:58.989417  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.996443  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.996477  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.489585  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.496235  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:59.496271  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.989892  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.994154  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:26:00.000276  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:26:00.000301  142057 api_server.go:131] duration metric: took 4.511183577s to wait for apiserver health ...
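The healthz sequence above shows the apiserver moving from connection refused, to 403 (anonymous user before RBAC bootstrap), to 500 while the bootstrap post-start hooks finish, and finally to 200. A hedged Go sketch of that polling loop follows; the InsecureSkipVerify client is an assumption for the sketch only.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "returned 200: ok" in the log
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	_ = waitForHealthz("https://192.168.50.184:8443/healthz", 4*time.Minute)
}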
	I0420 01:26:00.000311  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:26:00.000317  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:00.002217  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:26:00.003646  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:26:00.018114  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
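With the kvm2 driver and the crio runtime, bridge CNI is configured by writing a conflist into /etc/cni/net.d. The sketch below writes a generic minimal bridge conflist; the JSON is an assumption about the shape of such a file, not the exact 496-byte file minikube copies over.

package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	_ = os.MkdirAll("/etc/cni/net.d", 0o755)
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}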
	I0420 01:26:00.040866  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:26:00.050481  142057 system_pods.go:59] 8 kube-system pods found
	I0420 01:26:00.050514  142057 system_pods.go:61] "coredns-7db6d8ff4d-79bzc" [af5f0029-75b5-4131-8c60-5a4fee48c618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:26:00.050524  142057 system_pods.go:61] "etcd-embed-certs-269507" [d6dfc301-0cfb-4bfb-99f7-948b77b38f53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:26:00.050533  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [915deee2-f571-4337-bcdc-07f40d06b9c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:26:00.050539  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [21c885b0-6d1b-4593-87f3-141e512af7dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:26:00.050545  142057 system_pods.go:61] "kube-proxy-crzk6" [d5972e9a-15cd-4b62-90d5-c10bdfa20989] Running
	I0420 01:26:00.050553  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [1e556102-d4c9-494c-baf2-ab7e62d7d1e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:26:00.050559  142057 system_pods.go:61] "metrics-server-569cc877fc-8s79l" [1dc06e4a-3f47-4ef1-8757-81262c52fe55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:26:00.050583  142057 system_pods.go:61] "storage-provisioner" [f7b03907-0042-48d8-981b-1b8e665d58e7] Running
	I0420 01:26:00.050600  142057 system_pods.go:74] duration metric: took 9.699819ms to wait for pod list to return data ...
	I0420 01:26:00.050608  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:26:00.053915  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:26:00.053964  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:26:00.053975  142057 node_conditions.go:105] duration metric: took 3.363162ms to run NodePressure ...
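The "waiting for kube-system pods to appear" and NodePressure steps above amount to listing kube-system pods and inspecting node conditions through the API. A hedged client-go sketch is below; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, _ := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, _ := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure should report False on a healthy node.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("%s %s=%s\n", n.Name, c.Type, c.Status)
			}
		}
	}
}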
	I0420 01:26:00.053994  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:00.327736  142057 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332409  142057 kubeadm.go:733] kubelet initialised
	I0420 01:26:00.332434  142057 kubeadm.go:734] duration metric: took 4.671334ms waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332446  142057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:26:00.338296  142057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:59.292382  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:59.292905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:59.292939  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:59.292852  143434 retry.go:31] will retry after 4.056028382s: waiting for machine to come up
	I0420 01:26:03.350591  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:03.351022  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:26:03.351047  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:26:03.350978  143434 retry.go:31] will retry after 5.38819739s: waiting for machine to come up
	I0420 01:26:00.184338  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.684685  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.345607  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:03.850887  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:03.850915  142057 pod_ready.go:81] duration metric: took 3.512592061s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:03.850929  142057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:05.857665  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:05.183082  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:07.682906  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:10.191165  141746 start.go:364] duration metric: took 1m1.9514957s to acquireMachinesLock for "no-preload-338118"
	I0420 01:26:10.191222  141746 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:26:10.191235  141746 fix.go:54] fixHost starting: 
	I0420 01:26:10.191624  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:26:10.191668  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:26:10.212169  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I0420 01:26:10.212568  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:26:10.213074  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:26:10.213120  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:26:10.213524  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:26:10.213755  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:10.213957  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:26:10.215578  141746 fix.go:112] recreateIfNeeded on no-preload-338118: state=Stopped err=<nil>
	I0420 01:26:10.215604  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	W0420 01:26:10.215788  141746 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:26:10.217632  141746 out.go:177] * Restarting existing kvm2 VM for "no-preload-338118" ...
	I0420 01:26:10.218915  141746 main.go:141] libmachine: (no-preload-338118) Calling .Start
	I0420 01:26:10.219094  141746 main.go:141] libmachine: (no-preload-338118) Ensuring networks are active...
	I0420 01:26:10.219820  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network default is active
	I0420 01:26:10.220181  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network mk-no-preload-338118 is active
	I0420 01:26:10.220584  141746 main.go:141] libmachine: (no-preload-338118) Getting domain xml...
	I0420 01:26:10.221275  141746 main.go:141] libmachine: (no-preload-338118) Creating domain...
	I0420 01:26:08.363522  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:09.858701  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:09.858731  142057 pod_ready.go:81] duration metric: took 6.007793209s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:09.858742  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:08.743367  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.743867  142411 main.go:141] libmachine: (old-k8s-version-564860) Found IP for machine: 192.168.61.91
	I0420 01:26:08.743896  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserving static IP address...
	I0420 01:26:08.743914  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has current primary IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.744294  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.744324  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserved static IP address: 192.168.61.91
	I0420 01:26:08.744344  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | skip adding static IP to network mk-old-k8s-version-564860 - found existing host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"}
	I0420 01:26:08.744368  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Getting to WaitForSSH function...
	I0420 01:26:08.744387  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting for SSH to be available...
	I0420 01:26:08.746714  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747119  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.747155  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747278  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH client type: external
	I0420 01:26:08.747314  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa (-rw-------)
	I0420 01:26:08.747346  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:08.747359  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | About to run SSH command:
	I0420 01:26:08.747373  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | exit 0
	I0420 01:26:08.877633  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | SSH cmd err, output: <nil>: 
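"Waiting for SSH to be available" above is done by repeatedly running `exit 0` through the external ssh client with non-interactive options until it succeeds. A hedged Go sketch of such a probe follows; the key path and address come from the log, while the retry loop itself is an assumption about the surrounding control flow.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(addr, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", addr)
}

func main() {
	_ = waitForSSH("192.168.61.91",
		"/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa", 5)
}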
	I0420 01:26:08.878016  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:26:08.878715  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:08.881556  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.881982  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.882028  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.882326  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:26:08.882586  142411 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:08.882613  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:08.882853  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:08.885133  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885479  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.885510  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885647  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:08.885843  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886029  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886192  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:08.886403  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:08.886642  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:08.886657  142411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:09.006625  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:09.006655  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.006914  142411 buildroot.go:166] provisioning hostname "old-k8s-version-564860"
	I0420 01:26:09.006940  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.007144  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.010016  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010349  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.010374  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010597  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.010841  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011040  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011235  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.011439  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.011682  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.011718  142411 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-564860 && echo "old-k8s-version-564860" | sudo tee /etc/hostname
	I0420 01:26:09.155581  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-564860
	
	I0420 01:26:09.155612  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.158583  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159021  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.159068  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159285  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.159519  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159747  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159933  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.160128  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.160362  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.160390  142411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-564860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-564860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-564860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:09.288804  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:09.288834  142411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:09.288856  142411 buildroot.go:174] setting up certificates
	I0420 01:26:09.288867  142411 provision.go:84] configureAuth start
	I0420 01:26:09.288877  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.289286  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:09.292454  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.292884  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.292923  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.293076  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.295234  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295537  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.295565  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295675  142411 provision.go:143] copyHostCerts
	I0420 01:26:09.295747  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:09.295758  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:09.295811  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:09.295936  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:09.295951  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:09.295981  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:09.296063  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:09.296075  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:09.296095  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:09.296154  142411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-564860 san=[127.0.0.1 192.168.61.91 localhost minikube old-k8s-version-564860]
	I0420 01:26:09.436313  142411 provision.go:177] copyRemoteCerts
	I0420 01:26:09.436373  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:09.436401  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.439316  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.439743  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439856  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.440057  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.440226  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.440360  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:09.529141  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:09.558376  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0420 01:26:09.586393  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:26:09.615274  142411 provision.go:87] duration metric: took 326.393984ms to configureAuth
	I0420 01:26:09.615300  142411 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:09.615501  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:26:09.615590  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.618470  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.618905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.618938  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.619141  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.619325  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619505  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619662  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.619862  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.620073  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.620091  142411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:09.924929  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:09.924958  142411 machine.go:97] duration metric: took 1.042352034s to provisionDockerMachine
	I0420 01:26:09.924973  142411 start.go:293] postStartSetup for "old-k8s-version-564860" (driver="kvm2")
	I0420 01:26:09.924985  142411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:09.925021  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:09.925441  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:09.925485  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.927985  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928377  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.928407  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928565  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.928770  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.928944  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.929114  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.020189  142411 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:10.025578  142411 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:10.025607  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:10.025707  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:10.025795  142411 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:10.025888  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:10.038138  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:10.065063  142411 start.go:296] duration metric: took 140.07164ms for postStartSetup
	I0420 01:26:10.065111  142411 fix.go:56] duration metric: took 24.94209431s for fixHost
	I0420 01:26:10.065139  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.068099  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068493  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.068544  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068697  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.068916  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069255  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.069455  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:10.069662  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:10.069678  142411 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:10.190955  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576370.174630368
	
	I0420 01:26:10.190984  142411 fix.go:216] guest clock: 1713576370.174630368
	I0420 01:26:10.190994  142411 fix.go:229] Guest: 2024-04-20 01:26:10.174630368 +0000 UTC Remote: 2024-04-20 01:26:10.065116719 +0000 UTC m=+276.709087933 (delta=109.513649ms)
	I0420 01:26:10.191036  142411 fix.go:200] guest clock delta is within tolerance: 109.513649ms
	I0420 01:26:10.191044  142411 start.go:83] releasing machines lock for "old-k8s-version-564860", held for 25.068071712s
	I0420 01:26:10.191074  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.191368  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:10.194872  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195333  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.195365  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195510  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196060  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196253  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196331  142411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:10.196375  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.196439  142411 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:10.196467  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.199156  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199522  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.199572  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199760  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.199975  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200098  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.200137  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.200165  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.200326  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.200700  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.200857  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200992  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.201150  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.283430  142411 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:10.310703  142411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:10.462457  142411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:10.470897  142411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:10.470993  142411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:10.489867  142411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:10.489899  142411 start.go:494] detecting cgroup driver to use...
	I0420 01:26:10.489996  142411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:10.512741  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:10.530013  142411 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:10.530077  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:10.548567  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:10.565645  142411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:10.693390  142411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:10.878889  142411 docker.go:233] disabling docker service ...
	I0420 01:26:10.878973  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:10.901233  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:10.915219  142411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:11.053815  142411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:11.201766  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:11.218569  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:11.240543  142411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0420 01:26:11.240604  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.253384  142411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:11.253460  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.268703  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.281575  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.296477  142411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:11.312458  142411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:11.328008  142411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:11.328076  142411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:11.349027  142411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:11.362064  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:11.500624  142411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:11.665985  142411 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:11.666061  142411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:11.672929  142411 start.go:562] Will wait 60s for crictl version
	I0420 01:26:11.673006  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:11.678398  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:11.727572  142411 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:11.727663  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.760504  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.803463  142411 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0420 01:26:11.804782  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:11.807755  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808135  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:11.808177  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808396  142411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:11.813653  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:11.830618  142411 kubeadm.go:877] updating cluster {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:11.830793  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:26:11.830874  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:11.889149  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:11.889218  142411 ssh_runner.go:195] Run: which lz4
	I0420 01:26:11.894461  142411 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:26:11.900427  142411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:26:11.900456  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0420 01:26:10.183110  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:12.184209  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:11.636722  141746 main.go:141] libmachine: (no-preload-338118) Waiting to get IP...
	I0420 01:26:11.637635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.638048  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.638135  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.638011  143635 retry.go:31] will retry after 264.135122ms: waiting for machine to come up
	I0420 01:26:11.903486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.904008  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.904053  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.903958  143635 retry.go:31] will retry after 367.952741ms: waiting for machine to come up
	I0420 01:26:12.273951  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.274547  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.274584  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.274491  143635 retry.go:31] will retry after 390.958735ms: waiting for machine to come up
	I0420 01:26:12.667348  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.667888  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.667915  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.667820  143635 retry.go:31] will retry after 554.212994ms: waiting for machine to come up
	I0420 01:26:13.223423  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.224158  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.224184  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.224058  143635 retry.go:31] will retry after 686.102207ms: waiting for machine to come up
	I0420 01:26:13.911430  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.912019  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.912042  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.911968  143635 retry.go:31] will retry after 875.263983ms: waiting for machine to come up
	I0420 01:26:14.788949  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:14.789431  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:14.789481  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:14.789392  143635 retry.go:31] will retry after 847.129796ms: waiting for machine to come up
	I0420 01:26:15.637863  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:15.638348  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:15.638379  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:15.638288  143635 retry.go:31] will retry after 1.162423805s: waiting for machine to come up
	I0420 01:26:11.866297  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:13.868499  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:14.867208  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.867241  142057 pod_ready.go:81] duration metric: took 5.008488667s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.867254  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875100  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.875119  142057 pod_ready.go:81] duration metric: took 7.856647ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875131  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880630  142057 pod_ready.go:92] pod "kube-proxy-crzk6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.880651  142057 pod_ready.go:81] duration metric: took 5.512379ms for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880661  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885625  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.885645  142057 pod_ready.go:81] duration metric: took 4.976632ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885656  142057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.031960  142411 crio.go:462] duration metric: took 2.137532848s to copy over tarball
	I0420 01:26:14.032043  142411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:26:17.581625  142411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.549548059s)
	I0420 01:26:17.581660  142411 crio.go:469] duration metric: took 3.549666471s to extract the tarball
	I0420 01:26:17.581672  142411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:26:17.633172  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:17.679514  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:17.679544  142411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:17.679710  142411 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.679940  142411 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.680051  142411 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.680061  142411 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.680225  142411 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.680266  142411 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0420 01:26:17.680442  142411 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.680516  142411 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.682336  142411 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.682425  142411 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0420 01:26:17.682428  142411 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.682462  142411 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.682341  142411 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.682512  142411 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.682952  142411 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.682955  142411 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.846602  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.850673  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.866812  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.871983  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.876346  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0420 01:26:17.876745  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.881269  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.985788  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.997662  142411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0420 01:26:17.997709  142411 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0420 01:26:17.997716  142411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.997751  142411 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.997778  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:17.997797  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071610  142411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0420 01:26:18.071682  142411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.071705  142411 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0420 01:26:18.071741  142411 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.071760  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071793  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.085631  142411 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0420 01:26:18.085689  142411 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0420 01:26:18.085748  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.087239  142411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0420 01:26:18.087288  142411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.087362  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.094891  142411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0420 01:26:18.094940  142411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:18.094989  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.232524  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.232613  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0420 01:26:18.232649  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.232682  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.232710  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:14.684499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:17.185481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:16.802494  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:16.802977  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:16.803009  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:16.802908  143635 retry.go:31] will retry after 1.370900633s: waiting for machine to come up
	I0420 01:26:18.175474  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:18.175996  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:18.176022  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:18.175943  143635 retry.go:31] will retry after 1.698879408s: waiting for machine to come up
	I0420 01:26:19.876437  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:19.876901  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:19.876932  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:19.876843  143635 retry.go:31] will retry after 2.622833508s: waiting for machine to come up
	I0420 01:26:16.894119  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.894941  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.408724  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0420 01:26:18.408791  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0420 01:26:18.410041  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0420 01:26:18.410136  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0420 01:26:18.424042  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0420 01:26:18.428203  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0420 01:26:18.428295  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0420 01:26:18.450170  142411 cache_images.go:92] duration metric: took 770.600266ms to LoadCachedImages
	W0420 01:26:18.450288  142411 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0420 01:26:18.450305  142411 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.20.0 crio true true} ...
	I0420 01:26:18.450428  142411 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-564860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:18.450522  142411 ssh_runner.go:195] Run: crio config
	I0420 01:26:18.503362  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:26:18.503407  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:18.503427  142411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:18.503463  142411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-564860 NodeName:old-k8s-version-564860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0420 01:26:18.503671  142411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-564860"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:18.503745  142411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0420 01:26:18.516393  142411 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:18.516475  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:18.529038  142411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0420 01:26:18.550442  142411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:18.572012  142411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0420 01:26:18.595682  142411 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:18.602036  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:18.622226  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:18.774466  142411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:18.795074  142411 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860 for IP: 192.168.61.91
	I0420 01:26:18.795104  142411 certs.go:194] generating shared ca certs ...
	I0420 01:26:18.795125  142411 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:18.795301  142411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:18.795342  142411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:18.795352  142411 certs.go:256] generating profile certs ...
	I0420 01:26:18.795433  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key
	I0420 01:26:18.795487  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f
	I0420 01:26:18.795524  142411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key
	I0420 01:26:18.795645  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:18.795675  142411 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:18.795685  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:18.795706  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:18.795735  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:18.795765  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:18.795828  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:18.796607  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:18.845581  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:18.891065  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:18.933536  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:18.977381  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:26:19.009816  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:19.042053  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:19.090614  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:26:19.119554  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:19.147545  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:19.177775  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:19.211008  142411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:19.234399  142411 ssh_runner.go:195] Run: openssl version
	I0420 01:26:19.242808  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:19.256132  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261681  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261739  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.270546  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:19.284112  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:19.296998  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302497  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302551  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.310883  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:19.325130  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:19.338964  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344915  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344986  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.351926  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:19.366428  142411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:19.372391  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:19.379606  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:19.386698  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:19.395102  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:19.401981  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:19.409477  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 01:26:19.416444  142411 kubeadm.go:391] StartCluster: {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:19.416557  142411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:19.416600  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.460782  142411 cri.go:89] found id: ""
	I0420 01:26:19.460884  142411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:19.473812  142411 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:19.473832  142411 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:19.473838  142411 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:19.473899  142411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:19.486686  142411 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:19.487757  142411 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:26:19.488411  142411 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-76456/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-564860" cluster setting kubeconfig missing "old-k8s-version-564860" context setting]
	I0420 01:26:19.489438  142411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:19.491237  142411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:19.503483  142411 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0420 01:26:19.503519  142411 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:19.503530  142411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:19.503597  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.546350  142411 cri.go:89] found id: ""
	I0420 01:26:19.546438  142411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:19.568177  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:19.580545  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:19.580573  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:19.580658  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:19.592945  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:19.593010  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:19.605598  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:19.617261  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:19.617346  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:19.629242  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.640143  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:19.640211  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.654226  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:19.666207  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:19.666275  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:19.678899  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:19.694374  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:19.845435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.619142  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.891265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.020834  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.124545  142411 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:21.124652  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:21.625462  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.125171  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:23.125077  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:19.685129  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.183561  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.502227  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:22.502665  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:22.502696  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:22.502603  143635 retry.go:31] will retry after 3.3877716s: waiting for machine to come up
	I0420 01:26:21.392042  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.392579  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.394230  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.625392  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.125446  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.625035  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.624718  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.625420  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.125162  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.625475  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:28.125637  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.685014  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:27.182545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.891769  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:25.892321  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:25.892353  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:25.892252  143635 retry.go:31] will retry after 3.395760477s: waiting for machine to come up
	I0420 01:26:29.290361  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:29.290858  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:29.290907  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:29.290791  143635 retry.go:31] will retry after 4.86761736s: waiting for machine to come up
	I0420 01:26:27.892903  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:30.392680  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:28.625781  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.125145  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.625647  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.125081  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.625404  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.124753  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.124750  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.624841  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:33.125120  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.682707  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:31.682790  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:33.683549  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.162306  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.162883  141746 main.go:141] libmachine: (no-preload-338118) Found IP for machine: 192.168.72.89
	I0420 01:26:34.162912  141746 main.go:141] libmachine: (no-preload-338118) Reserving static IP address...
	I0420 01:26:34.162928  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has current primary IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.163266  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.163296  141746 main.go:141] libmachine: (no-preload-338118) Reserved static IP address: 192.168.72.89
	I0420 01:26:34.163316  141746 main.go:141] libmachine: (no-preload-338118) DBG | skip adding static IP to network mk-no-preload-338118 - found existing host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"}
	I0420 01:26:34.163335  141746 main.go:141] libmachine: (no-preload-338118) DBG | Getting to WaitForSSH function...
	I0420 01:26:34.163350  141746 main.go:141] libmachine: (no-preload-338118) Waiting for SSH to be available...
	I0420 01:26:34.165641  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.165947  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.165967  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.166136  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH client type: external
	I0420 01:26:34.166161  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa (-rw-------)
	I0420 01:26:34.166190  141746 main.go:141] libmachine: (no-preload-338118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:34.166216  141746 main.go:141] libmachine: (no-preload-338118) DBG | About to run SSH command:
	I0420 01:26:34.166232  141746 main.go:141] libmachine: (no-preload-338118) DBG | exit 0
	I0420 01:26:34.293435  141746 main.go:141] libmachine: (no-preload-338118) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:34.293789  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetConfigRaw
	I0420 01:26:34.294381  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.296958  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297355  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.297391  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297670  141746 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/config.json ...
	I0420 01:26:34.297915  141746 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:34.297945  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:34.298191  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.300645  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.301068  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301280  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.301496  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301719  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301895  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.302104  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.302272  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.302284  141746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:34.419082  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:34.419113  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419424  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:26:34.419452  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419715  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.422630  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423010  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.423052  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423212  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.423415  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423599  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423716  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.423928  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.424135  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.424149  141746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-338118 && echo "no-preload-338118" | sudo tee /etc/hostname
	I0420 01:26:34.555223  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-338118
	
	I0420 01:26:34.555254  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.558217  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558606  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.558643  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558792  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.558999  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559241  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559423  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.559655  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.559827  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.559844  141746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-338118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-338118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-338118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:34.684192  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:34.684226  141746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:34.684261  141746 buildroot.go:174] setting up certificates
	I0420 01:26:34.684270  141746 provision.go:84] configureAuth start
	I0420 01:26:34.684289  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.684581  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.687363  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687703  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.687733  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687876  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.690220  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690542  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.690569  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690739  141746 provision.go:143] copyHostCerts
	I0420 01:26:34.690806  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:34.690817  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:34.690869  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:34.691006  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:34.691017  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:34.691038  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:34.691103  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:34.691111  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:34.691130  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:34.691178  141746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.no-preload-338118 san=[127.0.0.1 192.168.72.89 localhost minikube no-preload-338118]
	I0420 01:26:34.899595  141746 provision.go:177] copyRemoteCerts
	I0420 01:26:34.899652  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:34.899676  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.902298  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902745  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.902777  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902956  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.903150  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.903309  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.903457  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:34.993263  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:35.024837  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0420 01:26:35.054254  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:26:35.082455  141746 provision.go:87] duration metric: took 398.171071ms to configureAuth
	I0420 01:26:35.082488  141746 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:35.082741  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:26:35.082822  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.085868  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086264  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.086313  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086481  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.086708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.086868  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.087051  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.087254  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.087424  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.087440  141746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:35.374277  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:35.374305  141746 machine.go:97] duration metric: took 1.076369907s to provisionDockerMachine
	I0420 01:26:35.374327  141746 start.go:293] postStartSetup for "no-preload-338118" (driver="kvm2")
	I0420 01:26:35.374342  141746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:35.374366  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.374733  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:35.374787  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.378647  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.378998  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.379038  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.379149  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.379353  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.379518  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.379694  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.468711  141746 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:35.473783  141746 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:35.473808  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:35.473929  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:35.474088  141746 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:35.474217  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:35.484161  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:35.511695  141746 start.go:296] duration metric: took 137.354669ms for postStartSetup
	I0420 01:26:35.511751  141746 fix.go:56] duration metric: took 25.320502022s for fixHost
	I0420 01:26:35.511780  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.514635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.515067  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515247  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.515448  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515663  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515814  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.515988  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.516218  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.516240  141746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0420 01:26:35.632029  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576395.615634246
	
	I0420 01:26:35.632057  141746 fix.go:216] guest clock: 1713576395.615634246
	I0420 01:26:35.632067  141746 fix.go:229] Guest: 2024-04-20 01:26:35.615634246 +0000 UTC Remote: 2024-04-20 01:26:35.511757232 +0000 UTC m=+369.861721674 (delta=103.877014ms)
	I0420 01:26:35.632113  141746 fix.go:200] guest clock delta is within tolerance: 103.877014ms
	I0420 01:26:35.632137  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 25.440933699s
	I0420 01:26:35.632168  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.632486  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:35.635888  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636400  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.636440  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636751  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637250  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637448  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637547  141746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:35.637597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.637694  141746 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:35.637720  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.640562  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640800  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640953  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.640969  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641244  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641389  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.641433  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641644  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.641670  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641806  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.641873  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641997  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.642163  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:32.892859  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.893134  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:35.749528  141746 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:35.756960  141746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:35.912075  141746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:35.920264  141746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:35.920355  141746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:35.937729  141746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:35.937753  141746 start.go:494] detecting cgroup driver to use...
	I0420 01:26:35.937811  141746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:35.954425  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:35.970967  141746 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:35.971023  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:35.986186  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:36.000803  141746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:36.114673  141746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:36.273386  141746 docker.go:233] disabling docker service ...
	I0420 01:26:36.273472  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:36.290471  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:36.305722  141746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:36.459528  141746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:36.609105  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:36.627255  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:36.651459  141746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:26:36.651535  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.663171  141746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:36.663255  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.674706  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.686196  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.697909  141746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:36.709625  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.720746  141746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.740333  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.752898  141746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:36.764600  141746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:36.764653  141746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:36.780697  141746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:36.791440  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:36.936761  141746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:37.095374  141746 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:37.095475  141746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:37.101601  141746 start.go:562] Will wait 60s for crictl version
	I0420 01:26:37.101673  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.106191  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:37.152257  141746 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:37.152361  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.187172  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.225203  141746 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
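The crictl calls just above are the runtime verification step: minikube waits up to 60s for the CRI socket, then reads the runtime name and version before announcing "Preparing Kubernetes v1.30.0 on CRI-O 1.29.1". A minimal, hypothetical Go sketch (not minikube source) of parsing that "crictl version" output:

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command as in the log; run locally here instead of over SSH.
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
        if err != nil {
            panic(err)
        }
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            // Each line looks like "RuntimeVersion:  1.29.1".
            if key, val, ok := strings.Cut(sc.Text(), ":"); ok {
                fields[strings.TrimSpace(key)] = strings.TrimSpace(val)
            }
        }
        fmt.Printf("runtime %s %s (API %s)\n",
            fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
    }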
	I0420 01:26:33.625596  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.124972  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.624791  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.125630  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.624815  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.125677  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.625631  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.624883  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:38.124924  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
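The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines from process 142411 are a wait loop: after restarting the control plane, minikube polls roughly every 500ms until a kube-apiserver process for this profile shows up. A hypothetical sketch of that loop (local exec.Command standing in for the ssh_runner calls):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching kube-apiserver process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(60 * time.Second); err != nil {
            fmt.Println(err)
        }
    }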
	I0420 01:26:36.183893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.184381  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:37.226708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:37.229679  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230090  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:37.230131  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230253  141746 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:37.234914  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:37.249029  141746 kubeadm.go:877] updating cluster {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:37.249155  141746 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:26:37.249208  141746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:37.287235  141746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:26:37.287270  141746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:37.287341  141746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.287379  141746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.287387  141746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.287363  141746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.287414  141746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.287378  141746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.287399  141746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.287365  141746 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288833  141746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.288849  141746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.288863  141746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.288922  141746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.288933  141746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.288831  141746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.288957  141746 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288985  141746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.452705  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.462178  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.463495  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0420 01:26:37.469562  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.480726  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.501069  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.517291  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.533934  141746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0420 01:26:37.533976  141746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.534032  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.578341  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.602332  141746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0420 01:26:37.602381  141746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.602432  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.718979  141746 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0420 01:26:37.719028  141746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0420 01:26:37.719065  141746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0420 01:26:37.719093  141746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.719100  141746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0420 01:26:37.719126  141746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.719153  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719220  141746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0420 01:26:37.719256  141746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.719067  141746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.719155  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719306  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.719309  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719036  141746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.719369  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719154  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.719297  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.733974  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.802462  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.802496  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.802544  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0420 01:26:37.802575  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0420 01:26:37.802637  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.802708  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.802725  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0420 01:26:37.802788  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:37.897150  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0420 01:26:37.897190  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0420 01:26:37.897259  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:37.897268  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0420 01:26:37.897278  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:37.897285  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.897295  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0420 01:26:37.897337  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.902046  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0420 01:26:37.902094  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0420 01:26:37.902151  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:37.902307  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0420 01:26:37.902399  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:37.914016  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0420 01:26:40.184815  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.287511777s)
	I0420 01:26:40.184859  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0420 01:26:40.184918  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.282742718s)
	I0420 01:26:40.184951  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.282534359s)
	I0420 01:26:40.184974  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0420 01:26:40.184981  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0420 01:26:40.185052  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.287690505s)
	I0420 01:26:40.185081  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0420 01:26:40.185113  141746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:40.185175  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
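The block above is the no-preload image path: for each required image, "podman image inspect --format {{.Id}}" is compared against the expected hash; on a mismatch the image is marked "needs transfer", any stale tag is removed with crictl rmi, and the cached tarball under /var/lib/minikube/images is loaded with "podman load -i". A hypothetical Go sketch of that check-then-load pattern (local exec.Command in place of ssh_runner; the example values are taken from the log lines above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func run(args ...string) (string, error) {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    // ensureImage loads the cached tarball only when the image is missing or
    // does not match the expected ID ("needs transfer" in the log).
    func ensureImage(ref, wantID, tarball string) error {
        id, err := run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref)
        if err == nil && id == wantID {
            return nil // already present at the expected hash
        }
        _, _ = run("sudo", "/usr/bin/crictl", "rmi", ref) // drop any stale tag
        if _, err := run("sudo", "podman", "load", "-i", tarball); err != nil {
            return fmt.Errorf("loading %s from %s: %v", ref, tarball, err)
        }
        return nil
    }

    func main() {
        err := ensureImage(
            "registry.k8s.io/kube-scheduler:v1.30.0",
            "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
            "/var/lib/minikube/images/kube-scheduler_v1.30.0",
        )
        fmt.Println(err)
    }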
	I0420 01:26:37.392757  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:39.394094  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.624766  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.125330  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.624953  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.125409  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.625125  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.125460  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.625041  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.125103  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:43.125237  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.186531  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.683524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.252666  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067465398s)
	I0420 01:26:42.252710  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0420 01:26:42.252735  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:42.252774  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:44.616564  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.363755421s)
	I0420 01:26:44.616614  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0420 01:26:44.616649  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:44.616713  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:41.394300  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.895493  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.625155  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.124986  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.125834  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.625359  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.125706  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.625115  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.125204  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.625746  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:48.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.183628  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:47.684002  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:46.894590  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.277850916s)
	I0420 01:26:46.894626  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0420 01:26:46.894655  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:46.894712  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:49.158327  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.263583483s)
	I0420 01:26:49.158370  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0420 01:26:49.158406  141746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:49.158478  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:50.223297  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.06478687s)
	I0420 01:26:50.223344  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0420 01:26:50.223382  141746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:50.223452  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:46.393020  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.394414  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:50.893840  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.125441  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.625078  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.124787  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.624817  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.125211  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.625408  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.124903  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.624826  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:53.124728  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.183173  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:52.183563  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:54.187354  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.963876859s)
	I0420 01:26:54.187388  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0420 01:26:54.187416  141746 cache_images.go:123] Successfully loaded all cached images
	I0420 01:26:54.187426  141746 cache_images.go:92] duration metric: took 16.900140079s to LoadCachedImages
	I0420 01:26:54.187439  141746 kubeadm.go:928] updating node { 192.168.72.89 8443 v1.30.0 crio true true} ...
	I0420 01:26:54.187545  141746 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-338118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:54.187608  141746 ssh_runner.go:195] Run: crio config
	I0420 01:26:54.245888  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:26:54.245914  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:54.245928  141746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:54.245954  141746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.89 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-338118 NodeName:no-preload-338118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:26:54.246153  141746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-338118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:54.246232  141746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:26:54.259262  141746 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:54.259360  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:54.270769  141746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0420 01:26:54.290436  141746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:54.311846  141746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
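The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new on the node. A hypothetical sanity-check sketch, assuming gopkg.in/yaml.v3 is available, that decodes the stream and lists the kinds it contains:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Expect InitConfiguration, ClusterConfiguration,
            // KubeletConfiguration and KubeProxyConfiguration in order.
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }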
	I0420 01:26:54.332517  141746 ssh_runner.go:195] Run: grep 192.168.72.89	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:54.336874  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:54.350084  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:54.466328  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:54.484511  141746 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118 for IP: 192.168.72.89
	I0420 01:26:54.484545  141746 certs.go:194] generating shared ca certs ...
	I0420 01:26:54.484609  141746 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:54.484846  141746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:54.484960  141746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:54.484996  141746 certs.go:256] generating profile certs ...
	I0420 01:26:54.485165  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/client.key
	I0420 01:26:54.485273  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key.f8d917a4
	I0420 01:26:54.485353  141746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key
	I0420 01:26:54.485543  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:54.485604  141746 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:54.485622  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:54.485667  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:54.485707  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:54.485741  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:54.485804  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:54.486486  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:54.539867  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:54.575443  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:54.609857  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:54.638338  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 01:26:54.672043  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:54.704197  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:54.733771  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:26:54.761911  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:54.789278  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:54.816890  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:54.845884  141746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:54.864508  141746 ssh_runner.go:195] Run: openssl version
	I0420 01:26:54.870717  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:54.883192  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888532  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888588  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.895258  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:54.907346  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:54.919360  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924700  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924773  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.931133  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:54.942845  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:54.954785  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959769  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959856  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.966061  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:54.978389  141746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:54.983591  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:54.990157  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:54.996977  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:55.004103  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:55.010928  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:55.018024  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
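The run of openssl calls above checks that each existing control-plane certificate remains valid for at least another 86400 seconds (24 hours) before the restart proceeds. A hypothetical Go equivalent of one such "-checkend 86400" check, using crypto/x509 instead of shelling out:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True when the certificate is still valid d from now.
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }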
	I0420 01:26:55.024639  141746 kubeadm.go:391] StartCluster: {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:55.024733  141746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:55.024784  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.073888  141746 cri.go:89] found id: ""
	I0420 01:26:55.073954  141746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:55.087179  141746 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:55.087199  141746 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:55.087208  141746 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:55.087255  141746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:55.098975  141746 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:55.100487  141746 kubeconfig.go:125] found "no-preload-338118" server: "https://192.168.72.89:8443"
	I0420 01:26:55.103557  141746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:55.114871  141746 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.89
	I0420 01:26:55.114900  141746 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:55.114914  141746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:55.114983  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.174863  141746 cri.go:89] found id: ""
	I0420 01:26:55.174969  141746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:55.192867  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:55.203842  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:55.203866  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:55.203919  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:55.214476  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:55.214534  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:55.224728  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:55.235353  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:55.235403  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:55.245905  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.256614  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:55.256678  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.266909  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:55.276249  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:55.276294  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:55.285758  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:55.295896  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:55.418331  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:53.394623  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:55.893492  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:53.625614  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.125487  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.625414  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.125150  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.624831  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.125438  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.625450  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.125591  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.625757  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:58.124963  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.186686  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.681991  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.682958  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.156484  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.376987  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.450655  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
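Because the existing kubeconfig files were missing, the restart path above re-runs the individual "kubeadm init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml rather than doing a full "kubeadm init". A hypothetical sketch of that sequence (PATH handling simplified compared to the logged commands; this is not minikube's source):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.30.0:/usr/sbin:/usr/bin",
                "kubeadm", "init", "phase"}
            args = append(args, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            // Run each phase in order and stop at the first failure.
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
                return
            }
            fmt.Printf("phase %v ok\n", p)
        }
    }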
	I0420 01:26:56.517915  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:56.518018  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.018277  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.518215  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.538017  141746 api_server.go:72] duration metric: took 1.020104679s to wait for apiserver process to appear ...
	I0420 01:26:57.538045  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:26:57.538070  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
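Once the kube-apiserver process exists, minikube switches to probing the /healthz endpoint. Each probe uses a short per-request timeout, which is why the lines further below end in "Client.Timeout exceeded" and are retried while the apiserver is still coming up. A hypothetical sketch of that polling (TLS verification disabled for brevity; minikube itself trusts the cluster CA from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // each probe gives up quickly and is retried
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 12; i++ {
            resp, err := client.Get("https://192.168.72.89:8443/healthz")
            if err != nil {
                fmt.Println("not ready yet:", err)
                time.Sleep(time.Second)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
            return
        }
    }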
	I0420 01:26:58.392944  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:00.892688  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.625549  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.125177  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.624704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.125709  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.625346  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.124849  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.624947  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.125407  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.625704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:03.125695  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.182564  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.183451  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:02.538442  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:02.538498  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:03.396891  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:05.896375  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.625423  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.124806  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.625232  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.124917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.624983  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.124851  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.625029  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.125554  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.625163  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:08.125455  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.682216  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.683636  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.538926  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:07.538973  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:08.392765  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:10.392933  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:08.625100  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.125395  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.625454  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.125615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.624892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.125366  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.625074  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.125165  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.625629  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:13.124824  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.182884  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.683893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.540046  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:12.540121  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:12.393561  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:14.893756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:13.625040  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.125511  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.624890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.125622  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.625393  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.125215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.625561  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.125263  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.624772  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:18.125597  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.183734  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.683742  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.540652  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:17.540701  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.076616  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": read tcp 192.168.72.1:34174->192.168.72.89:8443: read: connection reset by peer
	I0420 01:27:18.076671  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.077186  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:18.538798  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.539454  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:19.039080  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:17.393196  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:19.395273  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:18.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.124956  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.625579  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.124827  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.625212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:21.125476  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:21.125553  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:21.174633  142411 cri.go:89] found id: ""
	I0420 01:27:21.174668  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.174679  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:21.174686  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:21.174767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:21.218230  142411 cri.go:89] found id: ""
	I0420 01:27:21.218263  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.218275  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:21.218284  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:21.218369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:21.258886  142411 cri.go:89] found id: ""
	I0420 01:27:21.258916  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.258926  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:21.258932  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:21.259003  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:21.306725  142411 cri.go:89] found id: ""
	I0420 01:27:21.306758  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.306769  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:21.306777  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:21.306843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:21.349049  142411 cri.go:89] found id: ""
	I0420 01:27:21.349086  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.349098  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:21.349106  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:21.349174  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:21.392312  142411 cri.go:89] found id: ""
	I0420 01:27:21.392338  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.392346  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:21.392352  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:21.392425  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:21.434121  142411 cri.go:89] found id: ""
	I0420 01:27:21.434148  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.434156  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:21.434162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:21.434210  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:21.473728  142411 cri.go:89] found id: ""
	I0420 01:27:21.473754  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.473762  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:21.473772  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:21.473785  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:21.537607  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:21.537648  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:21.554563  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:21.554604  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:21.674778  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:21.674803  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:21.674829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:21.740625  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:21.740666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:20.182461  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:22.682574  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.039641  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:24.039690  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:21.397381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:23.893642  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.284890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:24.301486  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:24.301571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:24.340987  142411 cri.go:89] found id: ""
	I0420 01:27:24.341012  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.341021  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:24.341026  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:24.341102  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:24.379983  142411 cri.go:89] found id: ""
	I0420 01:27:24.380014  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.380024  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:24.380029  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:24.380113  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:24.438700  142411 cri.go:89] found id: ""
	I0420 01:27:24.438729  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.438739  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:24.438745  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:24.438795  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:24.487761  142411 cri.go:89] found id: ""
	I0420 01:27:24.487793  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.487802  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:24.487808  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:24.487870  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:24.529408  142411 cri.go:89] found id: ""
	I0420 01:27:24.529439  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.529448  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:24.529453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:24.529523  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:24.572782  142411 cri.go:89] found id: ""
	I0420 01:27:24.572817  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.572831  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:24.572841  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:24.572910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:24.620651  142411 cri.go:89] found id: ""
	I0420 01:27:24.620684  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.620696  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:24.620704  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:24.620769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:24.659481  142411 cri.go:89] found id: ""
	I0420 01:27:24.659513  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.659525  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:24.659537  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:24.659552  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.714483  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:24.714517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:24.730279  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:24.730316  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:24.804883  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:24.804909  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:24.804926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:24.879557  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:24.879602  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:27.431026  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:27.448112  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:27.448176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:27.494959  142411 cri.go:89] found id: ""
	I0420 01:27:27.494988  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.494999  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:27.495007  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:27.495075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:27.532023  142411 cri.go:89] found id: ""
	I0420 01:27:27.532055  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.532066  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:27.532075  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:27.532151  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:27.578551  142411 cri.go:89] found id: ""
	I0420 01:27:27.578600  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.578613  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:27.578621  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:27.578692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:27.618248  142411 cri.go:89] found id: ""
	I0420 01:27:27.618277  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.618288  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:27.618296  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:27.618363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:27.655682  142411 cri.go:89] found id: ""
	I0420 01:27:27.655714  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.655723  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:27.655729  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:27.655787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:27.696355  142411 cri.go:89] found id: ""
	I0420 01:27:27.696389  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.696400  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:27.696408  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:27.696478  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:27.735354  142411 cri.go:89] found id: ""
	I0420 01:27:27.735378  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.735396  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:27.735402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:27.735460  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:27.775234  142411 cri.go:89] found id: ""
	I0420 01:27:27.775261  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.775269  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:27.775277  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:27.775294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:27.789970  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:27.790005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:27.873345  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:27.873371  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:27.873387  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:27.952309  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:27.952353  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:28.003746  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:28.003792  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.683122  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:27.182311  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:29.040691  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:29.040743  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:26.394161  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:28.893349  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.893785  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.555691  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:30.570962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:30.571041  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:30.613185  142411 cri.go:89] found id: ""
	I0420 01:27:30.613218  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.613227  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:30.613233  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:30.613291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:30.654494  142411 cri.go:89] found id: ""
	I0420 01:27:30.654520  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.654529  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:30.654535  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:30.654600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:30.702605  142411 cri.go:89] found id: ""
	I0420 01:27:30.702634  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.702646  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:30.702653  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:30.702719  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:30.742072  142411 cri.go:89] found id: ""
	I0420 01:27:30.742104  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.742115  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:30.742123  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:30.742191  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:30.793199  142411 cri.go:89] found id: ""
	I0420 01:27:30.793232  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.793244  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:30.793252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:30.793340  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:30.832978  142411 cri.go:89] found id: ""
	I0420 01:27:30.833019  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.833034  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:30.833044  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:30.833126  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:30.875606  142411 cri.go:89] found id: ""
	I0420 01:27:30.875641  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.875655  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:30.875662  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:30.875729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:30.917288  142411 cri.go:89] found id: ""
	I0420 01:27:30.917335  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.917348  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:30.917360  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:30.917375  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:30.996446  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:30.996469  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:30.996485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:31.080494  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:31.080543  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:31.141226  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:31.141260  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:31.212808  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:31.212845  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:29.182651  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:31.183179  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.682476  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:34.041737  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:34.041789  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:33.393756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:35.395120  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.728927  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:33.745749  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:33.745835  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:33.788813  142411 cri.go:89] found id: ""
	I0420 01:27:33.788845  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.788859  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:33.788868  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:33.788936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:33.834918  142411 cri.go:89] found id: ""
	I0420 01:27:33.834948  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.834957  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:33.834963  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:33.835026  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:33.873928  142411 cri.go:89] found id: ""
	I0420 01:27:33.873960  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.873972  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:33.873977  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:33.874027  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:33.921462  142411 cri.go:89] found id: ""
	I0420 01:27:33.921497  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.921510  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:33.921519  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:33.921606  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:33.962280  142411 cri.go:89] found id: ""
	I0420 01:27:33.962308  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.962320  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:33.962329  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:33.962390  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:34.002582  142411 cri.go:89] found id: ""
	I0420 01:27:34.002616  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.002627  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:34.002635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:34.002707  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:34.047383  142411 cri.go:89] found id: ""
	I0420 01:27:34.047410  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.047421  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:34.047428  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:34.047489  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:34.088296  142411 cri.go:89] found id: ""
	I0420 01:27:34.088341  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.088352  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:34.088364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:34.088381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:34.180338  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:34.180380  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:34.224386  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:34.224422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:34.278451  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:34.278488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:34.294377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:34.294409  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:34.377115  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:36.878000  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:36.896875  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:36.896953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:36.953915  142411 cri.go:89] found id: ""
	I0420 01:27:36.953954  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.953968  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:36.953977  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:36.954056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:36.998223  142411 cri.go:89] found id: ""
	I0420 01:27:36.998250  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.998260  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:36.998268  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:36.998337  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:37.069299  142411 cri.go:89] found id: ""
	I0420 01:27:37.069346  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.069358  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:37.069366  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:37.069436  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:37.112068  142411 cri.go:89] found id: ""
	I0420 01:27:37.112100  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.112112  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:37.112119  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:37.112175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:37.155883  142411 cri.go:89] found id: ""
	I0420 01:27:37.155913  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.155924  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:37.155933  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:37.156006  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:37.200979  142411 cri.go:89] found id: ""
	I0420 01:27:37.201007  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.201018  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:37.201026  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:37.201091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:37.241639  142411 cri.go:89] found id: ""
	I0420 01:27:37.241667  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.241678  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:37.241686  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:37.241748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:37.281845  142411 cri.go:89] found id: ""
	I0420 01:27:37.281883  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.281894  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:37.281907  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:37.281923  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:37.327428  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:37.327463  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:37.385213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:37.385248  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:37.400158  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:37.400190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:37.476662  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:37.476687  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:37.476700  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:37.090819  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:27:37.090858  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:27:37.090877  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.124020  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.124076  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:37.538389  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.550894  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.550930  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.038486  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.051983  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:38.052019  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.538297  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.544961  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:27:38.553038  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:27:38.553065  141746 api_server.go:131] duration metric: took 41.015012791s to wait for apiserver health ...
	I0420 01:27:38.553075  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:27:38.553081  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:27:38.554687  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:27:35.684396  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.183391  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.555934  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:27:38.575384  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:27:38.609934  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:27:38.637152  141746 system_pods.go:59] 8 kube-system pods found
	I0420 01:27:38.637184  141746 system_pods.go:61] "coredns-7db6d8ff4d-r2hs7" [981840a2-82cd-49e0-8d4f-fbaf05290668] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:27:38.637191  141746 system_pods.go:61] "etcd-no-preload-338118" [92fc0da4-63d3-4f34-a5a6-27b73e7e210d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:27:38.637198  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [9f7bd5df-f733-4944-9ad2-0c9f0ea4529b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:27:38.637206  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [d7a0bd6a-2cd0-4b27-ae83-ae38c1a20c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:27:38.637215  141746 system_pods.go:61] "kube-proxy-zgq86" [d379ae65-c579-47e4-b055-6512e74868a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0420 01:27:38.637219  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [99558213-289d-4682-ba8e-20175c815563] Running
	I0420 01:27:38.637225  141746 system_pods.go:61] "metrics-server-569cc877fc-lcbcz" [1d2b716a-555a-46aa-ae27-c40553c94288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:27:38.637229  141746 system_pods.go:61] "storage-provisioner" [a8316010-8689-42aa-9741-227bf55a16bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:27:38.637236  141746 system_pods.go:74] duration metric: took 27.280844ms to wait for pod list to return data ...
	I0420 01:27:38.637243  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:27:38.640744  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:27:38.640774  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:27:38.640791  141746 node_conditions.go:105] duration metric: took 3.542872ms to run NodePressure ...
	I0420 01:27:38.640813  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:27:38.979785  141746 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987541  141746 kubeadm.go:733] kubelet initialised
	I0420 01:27:38.987570  141746 kubeadm.go:734] duration metric: took 7.752383ms waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987582  141746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:27:38.994929  141746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:38.999872  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999903  141746 pod_ready.go:81] duration metric: took 4.940439ms for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:38.999915  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999923  141746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.004575  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004595  141746 pod_ready.go:81] duration metric: took 4.662163ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.004603  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004608  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.012365  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012386  141746 pod_ready.go:81] duration metric: took 7.773001ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.012393  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012400  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.019091  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019125  141746 pod_ready.go:81] duration metric: took 6.70398ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.019137  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019146  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:37.894228  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:39.899004  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:40.075888  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:40.091313  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:40.091389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:40.134013  142411 cri.go:89] found id: ""
	I0420 01:27:40.134039  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.134048  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:40.134053  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:40.134136  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:40.182108  142411 cri.go:89] found id: ""
	I0420 01:27:40.182140  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.182151  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:40.182158  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:40.182222  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:40.225406  142411 cri.go:89] found id: ""
	I0420 01:27:40.225438  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.225447  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:40.225453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:40.225539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.267599  142411 cri.go:89] found id: ""
	I0420 01:27:40.267627  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.267636  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:40.267645  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:40.267790  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:40.309385  142411 cri.go:89] found id: ""
	I0420 01:27:40.309418  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.309439  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:40.309448  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:40.309525  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:40.351947  142411 cri.go:89] found id: ""
	I0420 01:27:40.351980  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.351993  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:40.352003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:40.352079  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:40.395583  142411 cri.go:89] found id: ""
	I0420 01:27:40.395614  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.395623  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:40.395629  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:40.395692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:40.441348  142411 cri.go:89] found id: ""
	I0420 01:27:40.441397  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.441412  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:40.441426  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:40.441445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:40.498231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:40.498268  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:40.514550  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:40.514578  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:40.593580  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:40.593614  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:40.593631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:40.671736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:40.671778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:43.224892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:43.240876  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:43.240939  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:43.281583  142411 cri.go:89] found id: ""
	I0420 01:27:43.281621  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.281634  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:43.281643  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:43.281705  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:43.321079  142411 cri.go:89] found id: ""
	I0420 01:27:43.321115  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.321125  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:43.321132  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:43.321277  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:43.365827  142411 cri.go:89] found id: ""
	I0420 01:27:43.365855  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.365864  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:43.365870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:43.365921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.184872  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.683826  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:41.025729  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.025868  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:45.526436  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.393681  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:44.401124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.404317  142411 cri.go:89] found id: ""
	I0420 01:27:43.404349  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.404361  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:43.404370  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:43.404443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:43.449268  142411 cri.go:89] found id: ""
	I0420 01:27:43.449299  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.449323  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:43.449331  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:43.449408  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:43.487782  142411 cri.go:89] found id: ""
	I0420 01:27:43.487829  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.487837  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:43.487844  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:43.487909  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:43.526650  142411 cri.go:89] found id: ""
	I0420 01:27:43.526677  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.526688  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:43.526695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:43.526755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:43.565288  142411 cri.go:89] found id: ""
	I0420 01:27:43.565328  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.565340  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:43.565352  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:43.565368  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:43.618013  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:43.618046  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:43.634064  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:43.634101  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:43.710633  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:43.710663  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:43.710679  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:43.796658  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:43.796709  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.352329  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:46.366848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:46.366935  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:46.413643  142411 cri.go:89] found id: ""
	I0420 01:27:46.413676  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.413687  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:46.413695  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:46.413762  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:46.457976  142411 cri.go:89] found id: ""
	I0420 01:27:46.458002  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.458011  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:46.458020  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:46.458086  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:46.500291  142411 cri.go:89] found id: ""
	I0420 01:27:46.500317  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.500328  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:46.500334  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:46.500398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:46.541279  142411 cri.go:89] found id: ""
	I0420 01:27:46.541331  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.541343  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:46.541359  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:46.541442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:46.585613  142411 cri.go:89] found id: ""
	I0420 01:27:46.585642  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.585654  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:46.585661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:46.585726  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:46.634400  142411 cri.go:89] found id: ""
	I0420 01:27:46.634430  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.634441  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:46.634450  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:46.634534  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:46.676276  142411 cri.go:89] found id: ""
	I0420 01:27:46.676305  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.676313  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:46.676320  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:46.676380  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:46.719323  142411 cri.go:89] found id: ""
	I0420 01:27:46.719356  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.719369  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:46.719381  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:46.719398  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:46.799735  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:46.799765  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:46.799790  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:46.878323  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:46.878371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.931870  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:46.931902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:46.983217  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:46.983250  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:45.182485  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.183499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.526708  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:50.034262  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:46.897249  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.393599  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.500147  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:49.517380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:49.517461  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:49.561300  142411 cri.go:89] found id: ""
	I0420 01:27:49.561347  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.561358  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:49.561365  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:49.561432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:49.604569  142411 cri.go:89] found id: ""
	I0420 01:27:49.604594  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.604608  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:49.604614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:49.604664  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:49.644952  142411 cri.go:89] found id: ""
	I0420 01:27:49.644983  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.644999  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:49.645006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:49.645071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:49.694719  142411 cri.go:89] found id: ""
	I0420 01:27:49.694749  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.694757  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:49.694764  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:49.694815  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:49.743821  142411 cri.go:89] found id: ""
	I0420 01:27:49.743849  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.743857  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:49.743865  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:49.743936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:49.789125  142411 cri.go:89] found id: ""
	I0420 01:27:49.789152  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.789161  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:49.789167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:49.789233  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:49.828794  142411 cri.go:89] found id: ""
	I0420 01:27:49.828829  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.828841  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:49.828848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:49.828913  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:49.873335  142411 cri.go:89] found id: ""
	I0420 01:27:49.873366  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.873375  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:49.873385  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:49.873397  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.930590  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:49.930632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:49.946850  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:49.946889  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:50.039200  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:50.039220  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:50.039236  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:50.122067  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:50.122118  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:52.664342  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:52.682978  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:52.683061  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:52.733806  142411 cri.go:89] found id: ""
	I0420 01:27:52.733836  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.733848  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:52.733855  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:52.733921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:52.785977  142411 cri.go:89] found id: ""
	I0420 01:27:52.786008  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.786020  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:52.786027  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:52.786092  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:52.826957  142411 cri.go:89] found id: ""
	I0420 01:27:52.826987  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.826995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:52.827001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:52.827056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:52.876208  142411 cri.go:89] found id: ""
	I0420 01:27:52.876251  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.876265  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:52.876276  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:52.876354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:52.918629  142411 cri.go:89] found id: ""
	I0420 01:27:52.918666  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.918679  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:52.918687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:52.918767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:52.967604  142411 cri.go:89] found id: ""
	I0420 01:27:52.967646  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.967655  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:52.967661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:52.967729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:53.010948  142411 cri.go:89] found id: ""
	I0420 01:27:53.010975  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.010983  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:53.010988  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:53.011039  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:53.055569  142411 cri.go:89] found id: ""
	I0420 01:27:53.055594  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.055611  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:53.055620  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:53.055633  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:53.071038  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:53.071067  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:53.151334  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:53.151364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:53.151381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:53.238509  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:53.238553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:53.284898  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:53.284945  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.183562  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.682524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:53.684003  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.027739  141746 pod_ready.go:92] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.027773  141746 pod_ready.go:81] duration metric: took 12.008613872s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.027785  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033100  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.033124  141746 pod_ready.go:81] duration metric: took 5.331694ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033136  141746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:53.041387  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.542345  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.896822  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:54.395015  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.843065  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:55.856928  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:55.857001  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:55.903058  142411 cri.go:89] found id: ""
	I0420 01:27:55.903092  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.903103  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:55.903111  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:55.903170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:55.944369  142411 cri.go:89] found id: ""
	I0420 01:27:55.944402  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.944414  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:55.944421  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:55.944474  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:55.983485  142411 cri.go:89] found id: ""
	I0420 01:27:55.983510  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.983517  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:55.983523  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:55.983571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:56.021931  142411 cri.go:89] found id: ""
	I0420 01:27:56.021956  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.021964  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:56.021970  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:56.022019  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:56.066671  142411 cri.go:89] found id: ""
	I0420 01:27:56.066705  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.066717  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:56.066724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:56.066788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:56.107724  142411 cri.go:89] found id: ""
	I0420 01:27:56.107783  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.107794  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:56.107800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:56.107854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:56.149201  142411 cri.go:89] found id: ""
	I0420 01:27:56.149234  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.149246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:56.149255  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:56.149328  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:56.189580  142411 cri.go:89] found id: ""
	I0420 01:27:56.189621  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.189633  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:56.189645  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:56.189661  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:56.243425  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:56.243462  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:56.261043  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:56.261079  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:56.341944  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:56.341967  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:56.341980  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:56.423252  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:56.423294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:55.684408  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.183545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:57.542492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.040617  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:56.892991  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.893124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.893660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.968894  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:58.984559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:58.984648  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:59.021603  142411 cri.go:89] found id: ""
	I0420 01:27:59.021634  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.021655  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:59.021666  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:59.021756  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:59.061592  142411 cri.go:89] found id: ""
	I0420 01:27:59.061626  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.061642  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:59.061649  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:59.061701  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:59.101956  142411 cri.go:89] found id: ""
	I0420 01:27:59.101986  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.101996  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:59.102003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:59.102072  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:59.141104  142411 cri.go:89] found id: ""
	I0420 01:27:59.141136  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.141145  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:59.141151  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:59.141221  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:59.188973  142411 cri.go:89] found id: ""
	I0420 01:27:59.189005  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.189014  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:59.189022  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:59.189107  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:59.232598  142411 cri.go:89] found id: ""
	I0420 01:27:59.232632  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.232641  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:59.232647  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:59.232704  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:59.272623  142411 cri.go:89] found id: ""
	I0420 01:27:59.272660  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.272669  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:59.272675  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:59.272739  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:59.309951  142411 cri.go:89] found id: ""
	I0420 01:27:59.309977  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.309984  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:59.309994  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:59.310005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:59.366589  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:59.366626  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:59.382724  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:59.382756  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:59.461072  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:59.461102  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:59.461122  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:59.544736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:59.544769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:02.089118  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:02.105402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:02.105483  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:02.144665  142411 cri.go:89] found id: ""
	I0420 01:28:02.144691  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.144700  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:02.144706  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:02.144759  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:02.187471  142411 cri.go:89] found id: ""
	I0420 01:28:02.187498  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.187508  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:02.187515  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:02.187576  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:02.229206  142411 cri.go:89] found id: ""
	I0420 01:28:02.229233  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.229241  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:02.229247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:02.229335  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:02.279425  142411 cri.go:89] found id: ""
	I0420 01:28:02.279464  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.279478  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:02.279488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:02.279577  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:02.323033  142411 cri.go:89] found id: ""
	I0420 01:28:02.323066  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.323082  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:02.323090  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:02.323155  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:02.360121  142411 cri.go:89] found id: ""
	I0420 01:28:02.360158  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.360170  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:02.360178  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:02.360244  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:02.398756  142411 cri.go:89] found id: ""
	I0420 01:28:02.398786  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.398797  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:02.398804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:02.398867  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:02.437982  142411 cri.go:89] found id: ""
	I0420 01:28:02.438010  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.438018  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:02.438028  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:02.438041  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:02.489396  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:02.489434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:02.506764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:02.506796  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:02.591894  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:02.591915  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:02.591929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:02.675241  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:02.675281  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:00.683139  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.684787  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.540829  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.041823  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:03.393076  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.396351  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.224296  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.238522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:05.238593  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:05.278495  142411 cri.go:89] found id: ""
	I0420 01:28:05.278529  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.278540  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:05.278549  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:05.278621  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:05.318096  142411 cri.go:89] found id: ""
	I0420 01:28:05.318122  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.318130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:05.318136  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:05.318196  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:05.358607  142411 cri.go:89] found id: ""
	I0420 01:28:05.358636  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.358653  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:05.358658  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:05.358749  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:05.417163  142411 cri.go:89] found id: ""
	I0420 01:28:05.417199  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.417211  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:05.417218  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:05.417284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:05.468566  142411 cri.go:89] found id: ""
	I0420 01:28:05.468599  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.468610  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:05.468619  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:05.468691  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:05.514005  142411 cri.go:89] found id: ""
	I0420 01:28:05.514037  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.514047  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:05.514055  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:05.514112  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:05.554972  142411 cri.go:89] found id: ""
	I0420 01:28:05.555001  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.555012  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:05.555020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:05.555083  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:05.596736  142411 cri.go:89] found id: ""
	I0420 01:28:05.596764  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.596773  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:05.596787  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:05.596800  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:05.649680  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:05.649719  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:05.667583  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:05.667614  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:05.743886  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:05.743922  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:05.743939  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:05.827827  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:05.827863  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.384615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.181917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.182902  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.541045  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:09.542114  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.892610  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:10.392899  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:08.401190  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:08.403071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:08.445453  142411 cri.go:89] found id: ""
	I0420 01:28:08.445486  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.445497  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:08.445505  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:08.445573  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:08.487598  142411 cri.go:89] found id: ""
	I0420 01:28:08.487636  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.487649  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:08.487657  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:08.487727  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:08.531416  142411 cri.go:89] found id: ""
	I0420 01:28:08.531445  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.531457  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:08.531465  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:08.531526  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:08.574964  142411 cri.go:89] found id: ""
	I0420 01:28:08.575000  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.575012  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:08.575020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:08.575075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:08.612644  142411 cri.go:89] found id: ""
	I0420 01:28:08.612679  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.612688  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:08.612695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:08.612748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:08.651775  142411 cri.go:89] found id: ""
	I0420 01:28:08.651800  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.651811  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:08.651817  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:08.651869  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:08.692869  142411 cri.go:89] found id: ""
	I0420 01:28:08.692894  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.692902  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:08.692908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:08.692957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:08.731765  142411 cri.go:89] found id: ""
	I0420 01:28:08.731794  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.731805  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:08.731817  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:08.731836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:08.747401  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:08.747445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:08.831069  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:08.831091  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:08.831110  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:08.919053  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:08.919095  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.965814  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:08.965854  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.518303  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:11.535213  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:11.535294  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:11.577182  142411 cri.go:89] found id: ""
	I0420 01:28:11.577214  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.577223  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:11.577229  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:11.577289  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:11.615023  142411 cri.go:89] found id: ""
	I0420 01:28:11.615055  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.615064  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:11.615070  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:11.615138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:11.654062  142411 cri.go:89] found id: ""
	I0420 01:28:11.654089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.654097  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:11.654104  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:11.654170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:11.700846  142411 cri.go:89] found id: ""
	I0420 01:28:11.700875  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.700885  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:11.700892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:11.700966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:11.743061  142411 cri.go:89] found id: ""
	I0420 01:28:11.743089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.743100  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:11.743109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:11.743175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:11.783651  142411 cri.go:89] found id: ""
	I0420 01:28:11.783687  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.783698  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:11.783706  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:11.783781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:11.827099  142411 cri.go:89] found id: ""
	I0420 01:28:11.827130  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.827139  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:11.827144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:11.827197  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:11.867476  142411 cri.go:89] found id: ""
	I0420 01:28:11.867510  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.867523  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:11.867535  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:11.867554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.920211  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:11.920246  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:11.937632  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:11.937670  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:12.014917  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:12.014940  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:12.014955  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:12.096549  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:12.096586  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:09.684447  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.183063  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.041220  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.540620  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.893441  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:15.408953  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.653783  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:14.667893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:14.667955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:14.710098  142411 cri.go:89] found id: ""
	I0420 01:28:14.710153  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.710164  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:14.710172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:14.710240  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:14.750891  142411 cri.go:89] found id: ""
	I0420 01:28:14.750920  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.750929  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:14.750939  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:14.751010  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:14.794062  142411 cri.go:89] found id: ""
	I0420 01:28:14.794103  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.794127  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:14.794135  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:14.794204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:14.834333  142411 cri.go:89] found id: ""
	I0420 01:28:14.834363  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.834375  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:14.834383  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:14.834446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:14.874114  142411 cri.go:89] found id: ""
	I0420 01:28:14.874148  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.874160  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:14.874168  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:14.874238  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:14.912685  142411 cri.go:89] found id: ""
	I0420 01:28:14.912711  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.912720  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:14.912726  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:14.912787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:14.954050  142411 cri.go:89] found id: ""
	I0420 01:28:14.954076  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.954083  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:14.954089  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:14.954150  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:14.992310  142411 cri.go:89] found id: ""
	I0420 01:28:14.992348  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.992357  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:14.992365  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:14.992388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:15.047471  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:15.047512  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:15.065800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:15.065842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:15.146009  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:15.146037  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:15.146058  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:15.232920  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:15.232962  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
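	(The block above is one iteration of minikube's apiserver wait loop: it probes the CRI for each control-plane container — kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard — finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying. A minimal sketch of the same checks run by hand on the node; the commands are the ones the log shows, reproducing them manually over SSH is an assumption and not part of the test output:

		sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is any apiserver process running?
		sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container, in any state?
		sudo journalctl -u kubelet -n 400                   # recent kubelet logs
		sudo journalctl -u crio -n 400                      # recent CRI-O logs
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	)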
	I0420 01:28:17.781215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:17.797404  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:17.797466  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:17.840532  142411 cri.go:89] found id: ""
	I0420 01:28:17.840564  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.840573  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:17.840579  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:17.840636  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:17.881562  142411 cri.go:89] found id: ""
	I0420 01:28:17.881588  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.881596  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:17.881602  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:17.881651  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:17.935068  142411 cri.go:89] found id: ""
	I0420 01:28:17.935098  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.935108  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:17.935115  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:17.935177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:17.980745  142411 cri.go:89] found id: ""
	I0420 01:28:17.980782  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.980795  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:17.980804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:17.980880  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:18.051120  142411 cri.go:89] found id: ""
	I0420 01:28:18.051153  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.051164  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:18.051171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:18.051235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:18.091741  142411 cri.go:89] found id: ""
	I0420 01:28:18.091776  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.091788  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:18.091796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:18.091864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:18.133438  142411 cri.go:89] found id: ""
	I0420 01:28:18.133472  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.133482  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:18.133488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:18.133560  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:18.174624  142411 cri.go:89] found id: ""
	I0420 01:28:18.174665  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.174679  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:18.174694  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:18.174713  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:18.228519  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:18.228563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:18.246452  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:18.246487  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:18.322051  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:18.322074  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:18.322088  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:14.684817  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.182405  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:16.541139  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.041191  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.895052  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.895901  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:18.404873  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:18.404904  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:20.950553  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:20.965081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:20.965139  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:21.007198  142411 cri.go:89] found id: ""
	I0420 01:28:21.007243  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.007255  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:21.007263  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:21.007330  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:21.050991  142411 cri.go:89] found id: ""
	I0420 01:28:21.051019  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.051028  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:21.051034  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:21.051104  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:21.091953  142411 cri.go:89] found id: ""
	I0420 01:28:21.091986  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.091995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:21.092001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:21.092085  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:21.134134  142411 cri.go:89] found id: ""
	I0420 01:28:21.134164  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.134174  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:21.134181  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:21.134251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:21.173698  142411 cri.go:89] found id: ""
	I0420 01:28:21.173724  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.173731  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:21.173737  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:21.173801  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:21.221327  142411 cri.go:89] found id: ""
	I0420 01:28:21.221354  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.221362  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:21.221369  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:21.221428  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:21.262752  142411 cri.go:89] found id: ""
	I0420 01:28:21.262780  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.262791  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:21.262798  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:21.262851  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:21.303497  142411 cri.go:89] found id: ""
	I0420 01:28:21.303524  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.303535  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:21.303547  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:21.303563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:21.358231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:21.358265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:21.373723  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:21.373753  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:21.465016  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:21.465044  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:21.465061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:21.552087  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:21.552117  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:19.683617  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.182720  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:21.540588  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.039211  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.393170  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.396378  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.099938  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:24.116967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:24.117045  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:24.159458  142411 cri.go:89] found id: ""
	I0420 01:28:24.159491  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.159501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:24.159508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:24.159574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:24.206028  142411 cri.go:89] found id: ""
	I0420 01:28:24.206054  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.206065  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:24.206072  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:24.206137  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:24.248047  142411 cri.go:89] found id: ""
	I0420 01:28:24.248088  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.248101  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:24.248109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:24.248176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:24.287867  142411 cri.go:89] found id: ""
	I0420 01:28:24.287898  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.287909  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:24.287917  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:24.287995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:24.329399  142411 cri.go:89] found id: ""
	I0420 01:28:24.329433  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.329444  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:24.329452  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:24.329519  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:24.367846  142411 cri.go:89] found id: ""
	I0420 01:28:24.367871  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.367882  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:24.367889  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:24.367960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:24.414245  142411 cri.go:89] found id: ""
	I0420 01:28:24.414272  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.414283  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:24.414291  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:24.414354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:24.453268  142411 cri.go:89] found id: ""
	I0420 01:28:24.453302  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.453331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:24.453344  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:24.453366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:24.514501  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:24.514546  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:24.529551  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:24.529591  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:24.613734  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.613757  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:24.613775  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:24.693804  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:24.693843  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.238443  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:27.254172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:27.254235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:27.297048  142411 cri.go:89] found id: ""
	I0420 01:28:27.297101  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.297111  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:27.297119  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:27.297181  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:27.340145  142411 cri.go:89] found id: ""
	I0420 01:28:27.340171  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.340181  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:27.340189  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:27.340316  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:27.383047  142411 cri.go:89] found id: ""
	I0420 01:28:27.383077  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.383089  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:27.383096  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:27.383169  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:27.428088  142411 cri.go:89] found id: ""
	I0420 01:28:27.428122  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.428134  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:27.428142  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:27.428206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:27.468257  142411 cri.go:89] found id: ""
	I0420 01:28:27.468300  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.468310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:27.468317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:27.468389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:27.508834  142411 cri.go:89] found id: ""
	I0420 01:28:27.508873  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.508885  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:27.508892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:27.508953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:27.548853  142411 cri.go:89] found id: ""
	I0420 01:28:27.548893  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.548901  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:27.548908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:27.548956  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:27.587841  142411 cri.go:89] found id: ""
	I0420 01:28:27.587875  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.587886  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:27.587899  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:27.587917  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:27.667848  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:27.667888  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.714820  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:27.714856  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:27.766337  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:27.766381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:27.782585  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:27.782627  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:27.856172  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
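	(Every "describe nodes" attempt in this run fails identically: kubectl on the node cannot reach an apiserver at localhost:8443, which matches the empty crictl listings above — no kube-apiserver container exists in any state, so nothing serves that port. A hand-run confirmation, sketched under the assumption of a shell on the node; the ss probe is an added illustration and does not appear in this log:

		sudo crictl ps -a --quiet --name=kube-apiserver     # empty output: no apiserver container to serve :8443
		sudo ss -ltn | grep 8443 || echo 'nothing listening on 8443'   # assumed extra check, not from the log
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # exits 1: connection refused
	)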
	I0420 01:28:24.184768  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.683097  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.040531  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:28.040802  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.542386  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.893091  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:29.393546  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.356809  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:30.372449  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:30.372529  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:30.422164  142411 cri.go:89] found id: ""
	I0420 01:28:30.422198  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.422209  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:30.422218  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:30.422283  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:30.460367  142411 cri.go:89] found id: ""
	I0420 01:28:30.460395  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.460404  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:30.460411  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:30.460498  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:30.508423  142411 cri.go:89] found id: ""
	I0420 01:28:30.508460  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.508471  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:30.508479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:30.508546  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:30.553124  142411 cri.go:89] found id: ""
	I0420 01:28:30.553152  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.553161  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:30.553167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:30.553225  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:30.601866  142411 cri.go:89] found id: ""
	I0420 01:28:30.601908  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.601919  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:30.601939  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:30.602014  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:30.645413  142411 cri.go:89] found id: ""
	I0420 01:28:30.645446  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.645457  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:30.645467  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:30.645539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:30.690955  142411 cri.go:89] found id: ""
	I0420 01:28:30.690988  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.690997  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:30.691006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:30.691077  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:30.732146  142411 cri.go:89] found id: ""
	I0420 01:28:30.732186  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.732197  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:30.732209  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:30.732228  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:30.786890  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:30.786928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:30.802887  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:30.802920  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:30.884422  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:30.884447  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:30.884461  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:30.967504  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:30.967540  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:29.183645  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.683218  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.044031  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:35.540100  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.897363  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:34.392658  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.515720  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:33.531895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:33.531953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:33.574626  142411 cri.go:89] found id: ""
	I0420 01:28:33.574668  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.574682  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:33.574690  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:33.574757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:33.620527  142411 cri.go:89] found id: ""
	I0420 01:28:33.620553  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.620562  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:33.620568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:33.620630  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:33.659685  142411 cri.go:89] found id: ""
	I0420 01:28:33.659711  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.659719  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:33.659724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:33.659773  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:33.699390  142411 cri.go:89] found id: ""
	I0420 01:28:33.699414  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.699422  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:33.699427  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:33.699485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:33.743819  142411 cri.go:89] found id: ""
	I0420 01:28:33.743844  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.743852  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:33.743858  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:33.743907  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:33.788416  142411 cri.go:89] found id: ""
	I0420 01:28:33.788442  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.788450  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:33.788456  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:33.788514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:33.834105  142411 cri.go:89] found id: ""
	I0420 01:28:33.834129  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.834138  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:33.834144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:33.834206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:33.884118  142411 cri.go:89] found id: ""
	I0420 01:28:33.884152  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.884164  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:33.884176  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:33.884193  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:33.940493  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:33.940525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:33.954800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:33.954829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:34.030788  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:34.030812  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:34.030829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:34.119533  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:34.119574  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.667132  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:36.684253  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:36.684334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:36.723598  142411 cri.go:89] found id: ""
	I0420 01:28:36.723629  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.723641  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:36.723649  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:36.723718  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:36.761563  142411 cri.go:89] found id: ""
	I0420 01:28:36.761594  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.761606  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:36.761614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:36.761679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:36.803553  142411 cri.go:89] found id: ""
	I0420 01:28:36.803590  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.803603  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:36.803611  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:36.803674  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:36.840368  142411 cri.go:89] found id: ""
	I0420 01:28:36.840407  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.840421  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:36.840430  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:36.840497  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:36.879689  142411 cri.go:89] found id: ""
	I0420 01:28:36.879724  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.879735  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:36.879743  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:36.879807  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:36.920757  142411 cri.go:89] found id: ""
	I0420 01:28:36.920785  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.920796  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:36.920809  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:36.920871  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:36.957522  142411 cri.go:89] found id: ""
	I0420 01:28:36.957548  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.957556  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:36.957562  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:36.957624  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:36.997358  142411 cri.go:89] found id: ""
	I0420 01:28:36.997390  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.997400  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:36.997409  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:36.997422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:37.055063  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:37.055105  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:37.070691  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:37.070720  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:37.150114  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:37.150140  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:37.150152  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:37.228676  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:37.228711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.182514  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.183398  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.040622  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.539486  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:36.395217  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.893457  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.894381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:39.776620  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:39.792201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:39.792268  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:39.831544  142411 cri.go:89] found id: ""
	I0420 01:28:39.831568  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.831576  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:39.831588  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:39.831652  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:39.869458  142411 cri.go:89] found id: ""
	I0420 01:28:39.869488  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.869496  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:39.869503  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:39.869564  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:39.911588  142411 cri.go:89] found id: ""
	I0420 01:28:39.911615  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.911626  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:39.911633  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:39.911703  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:39.952458  142411 cri.go:89] found id: ""
	I0420 01:28:39.952489  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.952505  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:39.952513  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:39.952580  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:39.992988  142411 cri.go:89] found id: ""
	I0420 01:28:39.993016  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.993023  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:39.993029  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:39.993117  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:40.038306  142411 cri.go:89] found id: ""
	I0420 01:28:40.038348  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.038359  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:40.038367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:40.038432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:40.082185  142411 cri.go:89] found id: ""
	I0420 01:28:40.082219  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.082230  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:40.082238  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:40.082332  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:40.120346  142411 cri.go:89] found id: ""
	I0420 01:28:40.120373  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.120382  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:40.120391  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:40.120405  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:40.173735  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:40.173769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:40.191808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:40.191844  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:40.271429  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:40.271456  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:40.271473  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:40.361519  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:40.361558  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:42.938354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:42.953088  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:42.953167  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:42.992539  142411 cri.go:89] found id: ""
	I0420 01:28:42.992564  142411 logs.go:276] 0 containers: []
	W0420 01:28:42.992571  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:42.992577  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:42.992637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:43.032017  142411 cri.go:89] found id: ""
	I0420 01:28:43.032059  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.032074  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:43.032082  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:43.032142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:43.077229  142411 cri.go:89] found id: ""
	I0420 01:28:43.077258  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.077266  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:43.077272  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:43.077342  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:43.117107  142411 cri.go:89] found id: ""
	I0420 01:28:43.117128  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.117139  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:43.117145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:43.117206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:43.156262  142411 cri.go:89] found id: ""
	I0420 01:28:43.156297  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.156310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:43.156317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:43.156384  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:43.195897  142411 cri.go:89] found id: ""
	I0420 01:28:43.195927  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.195935  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:43.195942  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:43.195990  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:43.230468  142411 cri.go:89] found id: ""
	I0420 01:28:43.230498  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.230513  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:43.230522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:43.230586  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:43.271980  142411 cri.go:89] found id: ""
	I0420 01:28:43.272009  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.272023  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:43.272035  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:43.272050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:43.331606  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:43.331641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:43.348411  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:43.348437  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:28:40.682973  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.182655  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:42.540341  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.039729  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.393377  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.893276  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:28:43.428628  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:43.428654  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:43.428675  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:43.511471  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:43.511506  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.056166  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:46.071677  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:46.071744  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:46.110710  142411 cri.go:89] found id: ""
	I0420 01:28:46.110740  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.110753  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:46.110761  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:46.110825  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:46.170680  142411 cri.go:89] found id: ""
	I0420 01:28:46.170712  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.170724  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:46.170731  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:46.170794  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:46.216387  142411 cri.go:89] found id: ""
	I0420 01:28:46.216413  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.216421  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:46.216429  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:46.216485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:46.258641  142411 cri.go:89] found id: ""
	I0420 01:28:46.258674  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.258685  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:46.258694  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:46.258755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:46.296359  142411 cri.go:89] found id: ""
	I0420 01:28:46.296395  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.296407  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:46.296416  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:46.296480  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:46.335194  142411 cri.go:89] found id: ""
	I0420 01:28:46.335223  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.335238  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:46.335247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:46.335300  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:46.373748  142411 cri.go:89] found id: ""
	I0420 01:28:46.373777  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.373789  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:46.373796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:46.373860  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:46.416960  142411 cri.go:89] found id: ""
	I0420 01:28:46.416987  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.416995  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:46.417005  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:46.417017  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:46.497542  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:46.497582  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.548086  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:46.548136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:46.607354  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:46.607390  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:46.624379  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:46.624415  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:46.707425  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:45.682511  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.682752  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.046102  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.540014  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.895805  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:50.393001  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.208459  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:49.223081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:49.223146  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:49.258688  142411 cri.go:89] found id: ""
	I0420 01:28:49.258718  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.258728  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:49.258734  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:49.258791  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:49.296817  142411 cri.go:89] found id: ""
	I0420 01:28:49.296859  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.296870  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:49.296878  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:49.296941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:49.337821  142411 cri.go:89] found id: ""
	I0420 01:28:49.337853  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.337863  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:49.337870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:49.337940  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:49.381360  142411 cri.go:89] found id: ""
	I0420 01:28:49.381384  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.381392  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:49.381397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:49.381463  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:49.420099  142411 cri.go:89] found id: ""
	I0420 01:28:49.420143  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.420154  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:49.420162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:49.420223  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:49.459810  142411 cri.go:89] found id: ""
	I0420 01:28:49.459843  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.459850  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:49.459859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:49.459911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:49.499776  142411 cri.go:89] found id: ""
	I0420 01:28:49.499808  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.499820  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:49.499828  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:49.499894  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:49.536115  142411 cri.go:89] found id: ""
	I0420 01:28:49.536147  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.536158  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:49.536169  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:49.536190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:49.594665  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:49.594701  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:49.611896  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:49.611929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:49.689667  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:49.689685  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:49.689697  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.769061  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:49.769106  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.319299  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:52.336861  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:52.336934  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:52.380690  142411 cri.go:89] found id: ""
	I0420 01:28:52.380717  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.380725  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:52.380731  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:52.380781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:52.429798  142411 cri.go:89] found id: ""
	I0420 01:28:52.429831  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.429843  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:52.429851  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:52.429915  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:52.474087  142411 cri.go:89] found id: ""
	I0420 01:28:52.474120  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.474130  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:52.474139  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:52.474204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:52.514739  142411 cri.go:89] found id: ""
	I0420 01:28:52.514776  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.514789  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:52.514796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:52.514852  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:52.562100  142411 cri.go:89] found id: ""
	I0420 01:28:52.562195  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.562228  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:52.562236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:52.562324  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:52.623266  142411 cri.go:89] found id: ""
	I0420 01:28:52.623301  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.623313  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:52.623321  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:52.623386  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:52.667788  142411 cri.go:89] found id: ""
	I0420 01:28:52.667818  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.667828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:52.667838  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:52.667902  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:52.724607  142411 cri.go:89] found id: ""
	I0420 01:28:52.724636  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.724645  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:52.724654  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:52.724666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.774798  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:52.774836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:52.833949  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:52.833989  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:52.851757  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:52.851787  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:52.939092  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:52.939119  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:52.939136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.684112  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.182596  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:51.540918  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.039528  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.393913  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.892043  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:55.525807  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:55.540481  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:55.540557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:55.584415  142411 cri.go:89] found id: ""
	I0420 01:28:55.584447  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.584458  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:55.584466  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:55.584538  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:55.623920  142411 cri.go:89] found id: ""
	I0420 01:28:55.623955  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.623965  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:55.623973  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:55.624037  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:55.667768  142411 cri.go:89] found id: ""
	I0420 01:28:55.667802  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.667810  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:55.667816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:55.667889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:55.708466  142411 cri.go:89] found id: ""
	I0420 01:28:55.708502  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.708513  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:55.708520  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:55.708600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:55.748797  142411 cri.go:89] found id: ""
	I0420 01:28:55.748838  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.748849  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:55.748857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:55.748919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:55.791714  142411 cri.go:89] found id: ""
	I0420 01:28:55.791743  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.791752  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:55.791761  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:55.791832  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:55.833836  142411 cri.go:89] found id: ""
	I0420 01:28:55.833862  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.833872  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:55.833879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:55.833942  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:55.877425  142411 cri.go:89] found id: ""
	I0420 01:28:55.877462  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.877472  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:55.877484  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:55.877501  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:55.933237  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:55.933280  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:55.949507  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:55.949534  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:56.025596  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:56.025624  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:56.025641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:56.105403  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:56.105439  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:54.683664  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.684401  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.040380  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.040834  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:00.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.893067  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.894882  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.653368  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:58.669367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:58.669429  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:58.712457  142411 cri.go:89] found id: ""
	I0420 01:28:58.712490  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.712501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:58.712508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:58.712574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:58.750246  142411 cri.go:89] found id: ""
	I0420 01:28:58.750273  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.750281  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:58.750287  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:58.750351  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:58.793486  142411 cri.go:89] found id: ""
	I0420 01:28:58.793514  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.793522  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:58.793529  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:58.793595  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:58.839413  142411 cri.go:89] found id: ""
	I0420 01:28:58.839448  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.839461  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:58.839469  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:58.839537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:58.881385  142411 cri.go:89] found id: ""
	I0420 01:28:58.881418  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.881430  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:58.881438  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:58.881509  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:58.923900  142411 cri.go:89] found id: ""
	I0420 01:28:58.923945  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.923965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:58.923975  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:58.924038  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:58.962795  142411 cri.go:89] found id: ""
	I0420 01:28:58.962836  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.962848  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:58.962856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:58.962919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:59.006309  142411 cri.go:89] found id: ""
	I0420 01:28:59.006341  142411 logs.go:276] 0 containers: []
	W0420 01:28:59.006350  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:59.006360  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:59.006372  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:59.062778  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:59.062819  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.078600  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:59.078630  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:59.159340  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:59.159361  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:59.159376  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:59.247257  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:59.247307  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:01.792687  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:01.808507  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:01.808588  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:01.851642  142411 cri.go:89] found id: ""
	I0420 01:29:01.851680  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.851691  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:01.851699  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:01.851765  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:01.891516  142411 cri.go:89] found id: ""
	I0420 01:29:01.891549  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.891560  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:01.891568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:01.891640  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:01.934353  142411 cri.go:89] found id: ""
	I0420 01:29:01.934390  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.934402  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:01.934410  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:01.934479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:01.972552  142411 cri.go:89] found id: ""
	I0420 01:29:01.972587  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.972599  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:01.972607  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:01.972711  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:02.012316  142411 cri.go:89] found id: ""
	I0420 01:29:02.012348  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.012360  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:02.012368  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:02.012423  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:02.056951  142411 cri.go:89] found id: ""
	I0420 01:29:02.056984  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.056994  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:02.057001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:02.057164  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:02.104061  142411 cri.go:89] found id: ""
	I0420 01:29:02.104091  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.104102  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:02.104110  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:02.104163  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:02.144085  142411 cri.go:89] found id: ""
	I0420 01:29:02.144114  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.144125  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:02.144137  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:02.144160  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:02.216560  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:02.216585  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:02.216598  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:02.307178  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:02.307222  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:02.349769  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:02.349798  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:02.401141  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:02.401176  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.185384  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.684462  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.685188  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:02.041060  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.540616  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.393943  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.894095  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.917513  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:04.934187  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:04.934266  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:04.970258  142411 cri.go:89] found id: ""
	I0420 01:29:04.970289  142411 logs.go:276] 0 containers: []
	W0420 01:29:04.970298  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:04.970304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:04.970359  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:05.012853  142411 cri.go:89] found id: ""
	I0420 01:29:05.012883  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.012893  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:05.012899  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:05.012960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:05.054793  142411 cri.go:89] found id: ""
	I0420 01:29:05.054822  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.054833  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:05.054842  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:05.054910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:05.094637  142411 cri.go:89] found id: ""
	I0420 01:29:05.094674  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.094684  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:05.094701  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:05.094770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:05.134874  142411 cri.go:89] found id: ""
	I0420 01:29:05.134903  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.134912  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:05.134918  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:05.134973  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:05.175637  142411 cri.go:89] found id: ""
	I0420 01:29:05.175668  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.175679  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:05.175687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:05.175752  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:05.217809  142411 cri.go:89] found id: ""
	I0420 01:29:05.217847  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.217860  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:05.217867  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:05.217933  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:05.266884  142411 cri.go:89] found id: ""
	I0420 01:29:05.266917  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.266930  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:05.266941  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:05.266958  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:05.323765  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:05.323818  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:05.338524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:05.338553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:05.419860  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:05.419889  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:05.419906  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:05.506268  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:05.506311  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.055690  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:08.072692  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:08.072758  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:08.116247  142411 cri.go:89] found id: ""
	I0420 01:29:08.116287  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.116296  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:08.116304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:08.116369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:08.163152  142411 cri.go:89] found id: ""
	I0420 01:29:08.163177  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.163185  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:08.163190  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:08.163246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:08.207330  142411 cri.go:89] found id: ""
	I0420 01:29:08.207357  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.207365  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:08.207371  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:08.207422  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:08.249833  142411 cri.go:89] found id: ""
	I0420 01:29:08.249864  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.249873  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:08.249879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:08.249941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:08.290834  142411 cri.go:89] found id: ""
	I0420 01:29:08.290867  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.290876  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:08.290883  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:08.290957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:08.333767  142411 cri.go:89] found id: ""
	I0420 01:29:08.333799  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.333809  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:08.333816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:08.333888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:08.381431  142411 cri.go:89] found id: ""
	I0420 01:29:08.381459  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.381468  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:08.381474  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:08.381532  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:06.183719  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.184829  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.544179  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:09.039956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.394434  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.893184  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:10.897462  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.423702  142411 cri.go:89] found id: ""
	I0420 01:29:08.423727  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.423739  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:08.423751  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:08.423767  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.468422  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:08.468460  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:08.524091  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:08.524125  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:08.540294  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:08.540323  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:08.622439  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:08.622472  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:08.622488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.208472  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:11.225412  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:11.225479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:11.273723  142411 cri.go:89] found id: ""
	I0420 01:29:11.273755  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.273767  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:11.273775  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:11.273840  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:11.316083  142411 cri.go:89] found id: ""
	I0420 01:29:11.316118  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.316130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:11.316137  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:11.316203  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:11.355632  142411 cri.go:89] found id: ""
	I0420 01:29:11.355659  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.355668  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:11.355674  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:11.355734  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:11.397277  142411 cri.go:89] found id: ""
	I0420 01:29:11.397305  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.397327  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:11.397335  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:11.397399  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:11.439333  142411 cri.go:89] found id: ""
	I0420 01:29:11.439357  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.439366  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:11.439372  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:11.439433  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:11.477044  142411 cri.go:89] found id: ""
	I0420 01:29:11.477072  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.477079  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:11.477086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:11.477142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:11.516150  142411 cri.go:89] found id: ""
	I0420 01:29:11.516184  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.516196  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:11.516204  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:11.516274  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:11.557272  142411 cri.go:89] found id: ""
	I0420 01:29:11.557303  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.557331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:11.557344  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:11.557366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.652272  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:11.652319  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:11.700469  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:11.700504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:11.756674  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:11.756711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:11.772377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:11.772407  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:11.851387  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:10.682669  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:12.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:11.041282  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.541986  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.393346  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:15.394909  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:14.352257  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:14.367635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:14.367714  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:14.408757  142411 cri.go:89] found id: ""
	I0420 01:29:14.408779  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.408788  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:14.408794  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:14.408843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:14.455123  142411 cri.go:89] found id: ""
	I0420 01:29:14.455150  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.455159  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:14.455165  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:14.455239  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:14.499546  142411 cri.go:89] found id: ""
	I0420 01:29:14.499573  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.499581  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:14.499587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:14.499635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:14.541811  142411 cri.go:89] found id: ""
	I0420 01:29:14.541841  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.541851  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:14.541859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:14.541923  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:14.586965  142411 cri.go:89] found id: ""
	I0420 01:29:14.586990  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.587001  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:14.587008  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:14.587071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:14.625251  142411 cri.go:89] found id: ""
	I0420 01:29:14.625279  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.625288  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:14.625294  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:14.625377  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:14.665038  142411 cri.go:89] found id: ""
	I0420 01:29:14.665067  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.665079  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:14.665086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:14.665157  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:14.706931  142411 cri.go:89] found id: ""
	I0420 01:29:14.706964  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.706978  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:14.706992  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:14.707044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:14.761681  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:14.761717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:14.776324  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:14.776350  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:14.856707  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:14.856727  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:14.856738  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:14.944019  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:14.944064  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:17.489112  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:17.507594  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:17.507660  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:17.556091  142411 cri.go:89] found id: ""
	I0420 01:29:17.556122  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.556132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:17.556140  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:17.556205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:17.600016  142411 cri.go:89] found id: ""
	I0420 01:29:17.600072  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.600086  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:17.600107  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:17.600171  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:17.643074  142411 cri.go:89] found id: ""
	I0420 01:29:17.643106  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.643118  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:17.643125  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:17.643190  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:17.684798  142411 cri.go:89] found id: ""
	I0420 01:29:17.684827  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.684838  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:17.684845  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:17.684910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:17.725451  142411 cri.go:89] found id: ""
	I0420 01:29:17.725481  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.725494  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:17.725503  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:17.725575  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:17.765918  142411 cri.go:89] found id: ""
	I0420 01:29:17.765944  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.765952  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:17.765959  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:17.766023  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:17.806011  142411 cri.go:89] found id: ""
	I0420 01:29:17.806038  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.806049  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:17.806056  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:17.806122  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:17.848409  142411 cri.go:89] found id: ""
	I0420 01:29:17.848441  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.848453  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:17.848465  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:17.848488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:17.903854  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:17.903900  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:17.919156  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:17.919191  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:18.008073  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:18.008115  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:18.008133  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:18.095887  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:18.095929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:14.687917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.182326  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:16.039159  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:18.040487  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.540830  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.893270  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.392563  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.646919  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:20.664559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:20.664635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:20.714440  142411 cri.go:89] found id: ""
	I0420 01:29:20.714472  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.714481  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:20.714487  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:20.714543  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:20.755249  142411 cri.go:89] found id: ""
	I0420 01:29:20.755276  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.755287  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:20.755294  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:20.755355  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:20.795744  142411 cri.go:89] found id: ""
	I0420 01:29:20.795777  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.795786  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:20.795797  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:20.795864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:20.838083  142411 cri.go:89] found id: ""
	I0420 01:29:20.838111  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.838120  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:20.838128  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:20.838193  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:20.880198  142411 cri.go:89] found id: ""
	I0420 01:29:20.880227  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.880238  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:20.880245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:20.880312  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:20.920496  142411 cri.go:89] found id: ""
	I0420 01:29:20.920522  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.920530  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:20.920536  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:20.920618  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:20.960137  142411 cri.go:89] found id: ""
	I0420 01:29:20.960170  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.960180  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:20.960186  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:20.960251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:20.999583  142411 cri.go:89] found id: ""
	I0420 01:29:20.999624  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.999637  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:20.999649  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:20.999665  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:21.077439  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:21.077476  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:21.121104  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:21.121148  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:21.173871  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:21.173909  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:21.189767  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:21.189795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:21.264715  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:19.682554  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:21.682995  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.543452  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:25.040875  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.393626  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:24.894279  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:23.765605  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:23.782250  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:23.782334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:23.827248  142411 cri.go:89] found id: ""
	I0420 01:29:23.827277  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.827285  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:23.827291  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:23.827349  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:23.867610  142411 cri.go:89] found id: ""
	I0420 01:29:23.867636  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.867645  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:23.867651  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:23.867712  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:23.906244  142411 cri.go:89] found id: ""
	I0420 01:29:23.906271  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.906278  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:23.906283  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:23.906343  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:23.952256  142411 cri.go:89] found id: ""
	I0420 01:29:23.952288  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.952306  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:23.952314  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:23.952378  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:23.992843  142411 cri.go:89] found id: ""
	I0420 01:29:23.992879  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.992888  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:23.992896  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:23.992959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:24.036460  142411 cri.go:89] found id: ""
	I0420 01:29:24.036493  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.036504  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:24.036512  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:24.036582  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:24.075910  142411 cri.go:89] found id: ""
	I0420 01:29:24.075944  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.075955  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:24.075962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:24.076033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:24.122638  142411 cri.go:89] found id: ""
	I0420 01:29:24.122676  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.122688  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:24.122698  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:24.122717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:24.138022  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:24.138061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:24.220977  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:24.220998  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:24.221012  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.302928  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:24.302972  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:24.351237  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:24.351277  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:26.910354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:26.926815  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:26.926900  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:26.966123  142411 cri.go:89] found id: ""
	I0420 01:29:26.966155  142411 logs.go:276] 0 containers: []
	W0420 01:29:26.966165  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:26.966172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:26.966246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:27.011679  142411 cri.go:89] found id: ""
	I0420 01:29:27.011714  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.011727  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:27.011735  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:27.011806  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:27.052116  142411 cri.go:89] found id: ""
	I0420 01:29:27.052141  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.052148  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:27.052155  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:27.052202  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:27.090375  142411 cri.go:89] found id: ""
	I0420 01:29:27.090404  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.090413  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:27.090419  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:27.090476  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:27.131911  142411 cri.go:89] found id: ""
	I0420 01:29:27.131946  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.131957  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:27.131965  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:27.132033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:27.176663  142411 cri.go:89] found id: ""
	I0420 01:29:27.176696  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.176714  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:27.176723  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:27.176788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:27.217806  142411 cri.go:89] found id: ""
	I0420 01:29:27.217836  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.217846  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:27.217853  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:27.217917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:27.253956  142411 cri.go:89] found id: ""
	I0420 01:29:27.253981  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.253989  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:27.253998  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:27.254014  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:27.298225  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:27.298264  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:27.351213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:27.351259  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:27.366352  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:27.366388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:27.466716  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:27.466742  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:27.466770  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.184743  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:26.681862  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:28.683193  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.042377  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.539413  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.395660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.893947  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:30.050528  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:30.065697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:30.065769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:30.104643  142411 cri.go:89] found id: ""
	I0420 01:29:30.104675  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.104686  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:30.104694  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:30.104753  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:30.143864  142411 cri.go:89] found id: ""
	I0420 01:29:30.143892  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.143903  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:30.143910  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:30.143976  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:30.187925  142411 cri.go:89] found id: ""
	I0420 01:29:30.187954  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.187964  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:30.187972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:30.188035  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:30.227968  142411 cri.go:89] found id: ""
	I0420 01:29:30.227995  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.228003  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:30.228009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:30.228059  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.269550  142411 cri.go:89] found id: ""
	I0420 01:29:30.269584  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.269596  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:30.269604  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:30.269672  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:30.311777  142411 cri.go:89] found id: ""
	I0420 01:29:30.311810  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.311819  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:30.311827  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:30.311878  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:30.353569  142411 cri.go:89] found id: ""
	I0420 01:29:30.353601  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.353610  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:30.353617  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:30.353683  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:30.395003  142411 cri.go:89] found id: ""
	I0420 01:29:30.395032  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.395043  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:30.395054  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:30.395066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:30.455495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:30.455536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:30.473749  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:30.473778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:30.555370  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:30.555397  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:30.555417  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:30.637079  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:30.637124  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:33.188917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:33.203689  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:33.203757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:33.246796  142411 cri.go:89] found id: ""
	I0420 01:29:33.246828  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.246840  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:33.246848  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:33.246911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:33.284667  142411 cri.go:89] found id: ""
	I0420 01:29:33.284700  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.284712  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:33.284720  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:33.284782  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:33.328653  142411 cri.go:89] found id: ""
	I0420 01:29:33.328688  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.328701  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:33.328709  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:33.328777  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:33.369081  142411 cri.go:89] found id: ""
	I0420 01:29:33.369107  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.369121  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:33.369130  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:33.369180  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.684861  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:32.689885  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.547492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.040445  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.894902  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.392071  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:33.414282  142411 cri.go:89] found id: ""
	I0420 01:29:33.414313  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.414322  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:33.414327  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:33.414411  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:33.457086  142411 cri.go:89] found id: ""
	I0420 01:29:33.457112  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.457119  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:33.457126  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:33.457176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:33.498686  142411 cri.go:89] found id: ""
	I0420 01:29:33.498716  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.498729  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:33.498738  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:33.498808  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:33.538872  142411 cri.go:89] found id: ""
	I0420 01:29:33.538907  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.538920  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:33.538932  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:33.538959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:33.592586  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:33.592631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:33.609200  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:33.609226  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:33.690795  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:33.690820  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:33.690836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:33.776092  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:33.776131  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:36.331256  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:36.348813  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:36.348892  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:36.397503  142411 cri.go:89] found id: ""
	I0420 01:29:36.397527  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.397534  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:36.397540  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:36.397603  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:36.439638  142411 cri.go:89] found id: ""
	I0420 01:29:36.439667  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.439675  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:36.439685  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:36.439761  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:36.477155  142411 cri.go:89] found id: ""
	I0420 01:29:36.477182  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.477194  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:36.477201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:36.477259  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:36.533326  142411 cri.go:89] found id: ""
	I0420 01:29:36.533360  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.533373  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:36.533381  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:36.533446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:36.573056  142411 cri.go:89] found id: ""
	I0420 01:29:36.573093  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.573107  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:36.573114  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:36.573177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:36.611901  142411 cri.go:89] found id: ""
	I0420 01:29:36.611937  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.611949  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:36.611957  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:36.612017  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:36.656780  142411 cri.go:89] found id: ""
	I0420 01:29:36.656810  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.656823  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:36.656830  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:36.656899  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:36.699872  142411 cri.go:89] found id: ""
	I0420 01:29:36.699906  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.699916  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:36.699928  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:36.699943  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:36.758859  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:36.758895  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:36.775108  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:36.775145  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:36.858001  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:36.858027  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:36.858044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:36.936114  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:36.936154  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:35.182481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:37.182529  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.041125  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.043465  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.540023  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.395316  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.894062  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.894416  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:39.487167  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:39.502929  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:39.502995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:39.547338  142411 cri.go:89] found id: ""
	I0420 01:29:39.547363  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.547371  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:39.547377  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:39.547430  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:39.608684  142411 cri.go:89] found id: ""
	I0420 01:29:39.608714  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.608722  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:39.608728  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:39.608793  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:39.679248  142411 cri.go:89] found id: ""
	I0420 01:29:39.679281  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.679292  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:39.679300  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:39.679361  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:39.725226  142411 cri.go:89] found id: ""
	I0420 01:29:39.725257  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.725270  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:39.725278  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:39.725363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:39.767653  142411 cri.go:89] found id: ""
	I0420 01:29:39.767681  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.767690  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:39.767697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:39.767760  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:39.807848  142411 cri.go:89] found id: ""
	I0420 01:29:39.807885  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.807893  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:39.807900  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:39.807968  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:39.847171  142411 cri.go:89] found id: ""
	I0420 01:29:39.847201  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.847212  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:39.847219  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:39.847284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:39.884959  142411 cri.go:89] found id: ""
	I0420 01:29:39.884996  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.885007  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:39.885034  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:39.885050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:39.959245  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:39.959269  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:39.959286  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:40.041394  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:40.041436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:40.083125  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:40.083171  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:40.139902  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:40.139957  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:42.657038  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:42.673303  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:42.673407  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:42.717081  142411 cri.go:89] found id: ""
	I0420 01:29:42.717106  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.717114  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:42.717120  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:42.717170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:42.762322  142411 cri.go:89] found id: ""
	I0420 01:29:42.762357  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.762367  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:42.762375  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:42.762442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:42.805059  142411 cri.go:89] found id: ""
	I0420 01:29:42.805112  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.805122  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:42.805131  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:42.805201  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:42.848539  142411 cri.go:89] found id: ""
	I0420 01:29:42.848568  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.848580  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:42.848587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:42.848679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:42.887915  142411 cri.go:89] found id: ""
	I0420 01:29:42.887949  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.887960  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:42.887967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:42.888032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:42.938832  142411 cri.go:89] found id: ""
	I0420 01:29:42.938867  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.938878  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:42.938888  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:42.938957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:42.982376  142411 cri.go:89] found id: ""
	I0420 01:29:42.982402  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.982409  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:42.982415  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:42.982477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:43.023264  142411 cri.go:89] found id: ""
	I0420 01:29:43.023293  142411 logs.go:276] 0 containers: []
	W0420 01:29:43.023301  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:43.023313  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:43.023326  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:43.079673  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:43.079714  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:43.094753  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:43.094786  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:43.180113  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:43.180149  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:43.180177  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:43.259830  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:43.259872  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:39.182568  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:41.186805  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.683131  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:42.540687  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.039857  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.392948  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.394081  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.802515  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:45.816908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:45.816965  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:45.861091  142411 cri.go:89] found id: ""
	I0420 01:29:45.861123  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.861132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:45.861138  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:45.861224  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:45.901677  142411 cri.go:89] found id: ""
	I0420 01:29:45.901702  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.901710  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:45.901716  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:45.901767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:45.938301  142411 cri.go:89] found id: ""
	I0420 01:29:45.938325  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.938334  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:45.938339  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:45.938393  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:45.978432  142411 cri.go:89] found id: ""
	I0420 01:29:45.978460  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.978473  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:45.978479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:45.978537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:46.019410  142411 cri.go:89] found id: ""
	I0420 01:29:46.019446  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.019455  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:46.019461  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:46.019524  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:46.071002  142411 cri.go:89] found id: ""
	I0420 01:29:46.071032  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.071041  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:46.071052  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:46.071124  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:46.110362  142411 cri.go:89] found id: ""
	I0420 01:29:46.110391  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.110402  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:46.110409  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:46.110477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:46.152276  142411 cri.go:89] found id: ""
	I0420 01:29:46.152311  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.152322  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:46.152334  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:46.152351  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:46.205121  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:46.205159  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:46.221808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:46.221842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:46.300394  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:46.300418  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:46.300434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:46.391961  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:46.392002  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:45.684038  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.176081  141927 pod_ready.go:81] duration metric: took 4m0.00056563s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	E0420 01:29:48.176112  141927 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:29:48.176130  141927 pod_ready.go:38] duration metric: took 4m7.024291569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:29:48.176166  141927 kubeadm.go:591] duration metric: took 4m16.819079549s to restartPrimaryControlPlane
	W0420 01:29:48.176256  141927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:29:48.176291  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:29:47.040255  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.043956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:47.893875  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.894291  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.945086  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:48.961414  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:48.961491  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:49.010230  142411 cri.go:89] found id: ""
	I0420 01:29:49.010285  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.010299  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:49.010309  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:49.010385  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:49.054455  142411 cri.go:89] found id: ""
	I0420 01:29:49.054481  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.054491  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:49.054499  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:49.054566  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:49.094536  142411 cri.go:89] found id: ""
	I0420 01:29:49.094562  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.094572  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:49.094580  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:49.094740  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:49.134004  142411 cri.go:89] found id: ""
	I0420 01:29:49.134035  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.134046  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:49.134054  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:49.134118  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:49.173697  142411 cri.go:89] found id: ""
	I0420 01:29:49.173728  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.173741  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:49.173750  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:49.173817  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:49.215655  142411 cri.go:89] found id: ""
	I0420 01:29:49.215681  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.215689  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:49.215695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:49.215745  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:49.258282  142411 cri.go:89] found id: ""
	I0420 01:29:49.258312  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.258324  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:49.258332  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:49.258394  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:49.298565  142411 cri.go:89] found id: ""
	I0420 01:29:49.298597  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.298608  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:49.298620  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:49.298638  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:49.378833  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:49.378862  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:49.378880  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:49.467477  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:49.467517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:49.521747  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:49.521788  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:49.583386  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:49.583436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.102969  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:52.122971  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:52.123053  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:52.166166  142411 cri.go:89] found id: ""
	I0420 01:29:52.166199  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.166210  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:52.166219  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:52.166287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:52.206790  142411 cri.go:89] found id: ""
	I0420 01:29:52.206817  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.206824  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:52.206830  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:52.206889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:52.249879  142411 cri.go:89] found id: ""
	I0420 01:29:52.249911  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.249921  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:52.249931  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:52.249997  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:52.293953  142411 cri.go:89] found id: ""
	I0420 01:29:52.293997  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.294009  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:52.294018  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:52.294095  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:52.339447  142411 cri.go:89] found id: ""
	I0420 01:29:52.339478  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.339490  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:52.339497  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:52.339558  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:52.378383  142411 cri.go:89] found id: ""
	I0420 01:29:52.378416  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.378428  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:52.378435  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:52.378488  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:52.423079  142411 cri.go:89] found id: ""
	I0420 01:29:52.423121  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.423130  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:52.423137  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:52.423205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:52.459525  142411 cri.go:89] found id: ""
	I0420 01:29:52.459559  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.459572  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:52.459594  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:52.459610  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:52.567141  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:52.567186  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:52.618194  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:52.618235  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:52.681921  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:52.681959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.699065  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:52.699108  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:52.776829  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:51.540922  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.043224  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:52.397218  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.895147  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:55.277933  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:55.293380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:55.293455  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:55.337443  142411 cri.go:89] found id: ""
	I0420 01:29:55.337475  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.337483  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:55.337491  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:55.337557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:55.375911  142411 cri.go:89] found id: ""
	I0420 01:29:55.375942  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.375951  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:55.375957  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:55.376022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:55.418545  142411 cri.go:89] found id: ""
	I0420 01:29:55.418569  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.418577  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:55.418583  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:55.418635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:55.459343  142411 cri.go:89] found id: ""
	I0420 01:29:55.459378  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.459390  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:55.459397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:55.459452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:55.503851  142411 cri.go:89] found id: ""
	I0420 01:29:55.503878  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.503887  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:55.503895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:55.503959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:55.542533  142411 cri.go:89] found id: ""
	I0420 01:29:55.542556  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.542562  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:55.542568  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:55.542623  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:55.582205  142411 cri.go:89] found id: ""
	I0420 01:29:55.582236  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.582246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:55.582252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:55.582314  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:55.624727  142411 cri.go:89] found id: ""
	I0420 01:29:55.624757  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.624769  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:55.624781  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:55.624803  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:55.675403  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:55.675438  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:55.691492  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:55.691516  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:55.772283  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:55.772313  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:55.772330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:55.859440  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:55.859477  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:56.543221  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.041874  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:57.393723  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.894390  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:58.406009  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:58.422305  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:58.422382  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:58.468206  142411 cri.go:89] found id: ""
	I0420 01:29:58.468303  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.468321  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:58.468329  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:58.468402  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:58.513981  142411 cri.go:89] found id: ""
	I0420 01:29:58.514018  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.514027  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:58.514041  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:58.514105  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:58.559967  142411 cri.go:89] found id: ""
	I0420 01:29:58.560000  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.560011  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:58.560019  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:58.560084  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:58.600710  142411 cri.go:89] found id: ""
	I0420 01:29:58.600744  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.600763  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:58.600771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:58.600834  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:58.645995  142411 cri.go:89] found id: ""
	I0420 01:29:58.646022  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.646030  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:58.646036  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:58.646097  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:58.684930  142411 cri.go:89] found id: ""
	I0420 01:29:58.684957  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.684965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:58.684972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:58.685022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:58.727225  142411 cri.go:89] found id: ""
	I0420 01:29:58.727251  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.727259  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:58.727265  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:58.727319  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:58.765244  142411 cri.go:89] found id: ""
	I0420 01:29:58.765282  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.765293  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:58.765303  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:58.765330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:58.817791  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:58.817822  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:58.832882  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:58.832926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:58.919297  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:58.919325  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:58.919342  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:59.002590  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:59.002637  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.551854  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:01.568974  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:01.569054  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:01.609165  142411 cri.go:89] found id: ""
	I0420 01:30:01.609191  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.609200  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:01.609206  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:01.609272  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:01.653349  142411 cri.go:89] found id: ""
	I0420 01:30:01.653383  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.653396  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:01.653405  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:01.653482  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:01.698961  142411 cri.go:89] found id: ""
	I0420 01:30:01.698991  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.699002  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:01.699009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:01.699063  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:01.739230  142411 cri.go:89] found id: ""
	I0420 01:30:01.739271  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.739283  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:01.739292  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:01.739376  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:01.781839  142411 cri.go:89] found id: ""
	I0420 01:30:01.781873  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.781885  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:01.781893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:01.781960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:01.821212  142411 cri.go:89] found id: ""
	I0420 01:30:01.821241  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.821252  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:01.821259  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:01.821339  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:01.859959  142411 cri.go:89] found id: ""
	I0420 01:30:01.859984  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.859993  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:01.859999  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:01.860060  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:01.898832  142411 cri.go:89] found id: ""
	I0420 01:30:01.898858  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.898865  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:01.898875  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:01.898886  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.943065  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:01.943156  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:01.995618  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:01.995654  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:02.010489  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:02.010517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:02.090181  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:02.090222  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:02.090238  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:01.541135  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.041977  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:02.394456  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.894450  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.671376  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:04.687535  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:04.687629  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:04.728732  142411 cri.go:89] found id: ""
	I0420 01:30:04.728765  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.728778  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:04.728786  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:04.728854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:04.768537  142411 cri.go:89] found id: ""
	I0420 01:30:04.768583  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.768602  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:04.768610  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:04.768676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:04.811714  142411 cri.go:89] found id: ""
	I0420 01:30:04.811741  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.811750  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:04.811756  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:04.811816  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:04.852324  142411 cri.go:89] found id: ""
	I0420 01:30:04.852360  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.852371  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:04.852379  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:04.852452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:04.891657  142411 cri.go:89] found id: ""
	I0420 01:30:04.891688  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.891700  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:04.891708  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:04.891774  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:04.933192  142411 cri.go:89] found id: ""
	I0420 01:30:04.933222  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.933230  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:04.933236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:04.933291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:04.972796  142411 cri.go:89] found id: ""
	I0420 01:30:04.972819  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.972828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:04.972834  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:04.972888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:05.014782  142411 cri.go:89] found id: ""
	I0420 01:30:05.014821  142411 logs.go:276] 0 containers: []
	W0420 01:30:05.014833  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:05.014846  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:05.014862  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:05.067438  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:05.067470  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:05.121336  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:05.121371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:05.137495  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:05.137529  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:05.214132  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:05.214153  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:05.214170  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:07.796964  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:07.810856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:07.810917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:07.846993  142411 cri.go:89] found id: ""
	I0420 01:30:07.847024  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.847033  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:07.847040  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:07.847089  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:07.886422  142411 cri.go:89] found id: ""
	I0420 01:30:07.886452  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.886464  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:07.886474  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:07.886567  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:07.942200  142411 cri.go:89] found id: ""
	I0420 01:30:07.942230  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.942238  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:07.942245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:07.942296  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:07.980179  142411 cri.go:89] found id: ""
	I0420 01:30:07.980215  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.980226  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:07.980235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:07.980299  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:08.020097  142411 cri.go:89] found id: ""
	I0420 01:30:08.020130  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.020140  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:08.020145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:08.020215  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:08.063793  142411 cri.go:89] found id: ""
	I0420 01:30:08.063837  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.063848  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:08.063857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:08.063930  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:08.108674  142411 cri.go:89] found id: ""
	I0420 01:30:08.108705  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.108716  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:08.108724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:08.108798  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:08.147467  142411 cri.go:89] found id: ""
	I0420 01:30:08.147495  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.147503  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:08.147512  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:08.147525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:08.239416  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:08.239466  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:08.294639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:08.294669  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:08.349753  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:08.349795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:08.368971  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:08.369003  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:30:06.540958  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:08.541701  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:06.898857  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:09.397590  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:30:08.449996  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:10.950318  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:10.964969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:10.965032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:11.006321  142411 cri.go:89] found id: ""
	I0420 01:30:11.006354  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.006365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:11.006375  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:11.006437  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:11.047982  142411 cri.go:89] found id: ""
	I0420 01:30:11.048010  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.048019  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:11.048025  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:11.048073  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:11.089185  142411 cri.go:89] found id: ""
	I0420 01:30:11.089217  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.089226  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:11.089232  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:11.089287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:11.131293  142411 cri.go:89] found id: ""
	I0420 01:30:11.131322  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.131335  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:11.131344  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:11.131398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:11.170394  142411 cri.go:89] found id: ""
	I0420 01:30:11.170419  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.170427  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:11.170432  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:11.170485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:11.210580  142411 cri.go:89] found id: ""
	I0420 01:30:11.210619  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.210631  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:11.210640  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:11.210706  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:11.251938  142411 cri.go:89] found id: ""
	I0420 01:30:11.251977  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.251990  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:11.251998  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:11.252064  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:11.295999  142411 cri.go:89] found id: ""
	I0420 01:30:11.296033  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.296045  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:11.296057  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:11.296072  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:11.378564  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:11.378632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:11.422836  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:11.422868  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:11.475893  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:11.475928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:11.491524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:11.491555  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:11.569066  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:11.041078  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:13.540339  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:15.541762  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:11.893724  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.394206  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.886464  142057 pod_ready.go:81] duration metric: took 4m0.00077804s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
	E0420 01:30:14.886500  142057 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:30:14.886528  142057 pod_ready.go:38] duration metric: took 4m14.554070758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:14.886572  142057 kubeadm.go:591] duration metric: took 4m22.173690393s to restartPrimaryControlPlane
	W0420 01:30:14.886657  142057 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:14.886691  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:14.070158  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:14.086000  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:14.086067  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:14.128864  142411 cri.go:89] found id: ""
	I0420 01:30:14.128894  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.128906  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:14.128914  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:14.128986  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:14.169447  142411 cri.go:89] found id: ""
	I0420 01:30:14.169482  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.169497  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:14.169506  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:14.169583  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:14.210007  142411 cri.go:89] found id: ""
	I0420 01:30:14.210043  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.210054  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:14.210062  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:14.210119  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:14.247652  142411 cri.go:89] found id: ""
	I0420 01:30:14.247685  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.247695  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:14.247703  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:14.247764  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:14.290788  142411 cri.go:89] found id: ""
	I0420 01:30:14.290820  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.290830  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:14.290847  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:14.290908  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:14.351514  142411 cri.go:89] found id: ""
	I0420 01:30:14.351548  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.351570  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:14.351581  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:14.351637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:14.423481  142411 cri.go:89] found id: ""
	I0420 01:30:14.423520  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.423534  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:14.423543  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:14.423615  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:14.465597  142411 cri.go:89] found id: ""
	I0420 01:30:14.465622  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.465630  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:14.465639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:14.465655  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:14.522669  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:14.522705  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:14.541258  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:14.541293  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:14.618657  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:14.618678  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:14.618691  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:14.702616  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:14.702658  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:17.256212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:17.277171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:17.277250  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:17.321548  142411 cri.go:89] found id: ""
	I0420 01:30:17.321582  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.321600  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:17.321607  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:17.321676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:17.362856  142411 cri.go:89] found id: ""
	I0420 01:30:17.362883  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.362890  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:17.362896  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:17.362966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:17.409494  142411 cri.go:89] found id: ""
	I0420 01:30:17.409525  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.409539  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:17.409548  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:17.409631  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:17.447759  142411 cri.go:89] found id: ""
	I0420 01:30:17.447801  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.447812  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:17.447819  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:17.447885  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:17.498416  142411 cri.go:89] found id: ""
	I0420 01:30:17.498444  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.498454  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:17.498460  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:17.498528  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:17.546025  142411 cri.go:89] found id: ""
	I0420 01:30:17.546055  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.546064  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:17.546072  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:17.546138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:17.585797  142411 cri.go:89] found id: ""
	I0420 01:30:17.585829  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.585840  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:17.585848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:17.585919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:17.630850  142411 cri.go:89] found id: ""
	I0420 01:30:17.630886  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.630899  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:17.630911  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:17.630926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:17.689472  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:17.689510  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:17.705603  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:17.705642  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:17.794094  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:17.794137  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:17.794155  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:17.879397  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:17.879435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:18.041437  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.044174  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.428142  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:20.444936  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:20.445018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:20.487317  142411 cri.go:89] found id: ""
	I0420 01:30:20.487354  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.487365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:20.487373  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:20.487443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:20.537209  142411 cri.go:89] found id: ""
	I0420 01:30:20.537241  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.537254  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:20.537262  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:20.537348  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:20.584311  142411 cri.go:89] found id: ""
	I0420 01:30:20.584343  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.584352  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:20.584357  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:20.584413  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:20.631915  142411 cri.go:89] found id: ""
	I0420 01:30:20.631948  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.631959  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:20.631969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:20.632040  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:20.679680  142411 cri.go:89] found id: ""
	I0420 01:30:20.679707  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.679716  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:20.679721  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:20.679770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:20.724967  142411 cri.go:89] found id: ""
	I0420 01:30:20.725002  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.725013  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:20.725027  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:20.725091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:20.772717  142411 cri.go:89] found id: ""
	I0420 01:30:20.772751  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.772762  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:20.772771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:20.772837  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:20.812421  142411 cri.go:89] found id: ""
	I0420 01:30:20.812449  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.812460  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:20.812471  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:20.812485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:20.870522  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:20.870554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:20.886764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:20.886793  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:20.963941  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:20.963964  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:20.963979  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:21.045738  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:21.045778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:20.850989  141927 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.674674204s)
	I0420 01:30:20.851082  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:20.868537  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:20.880284  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:20.891650  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:20.891672  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:20.891726  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:30:20.902443  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:20.902509  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:20.913476  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:30:20.923762  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:20.923836  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:20.934281  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.944194  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:20.944254  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.955506  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:30:20.968039  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:20.968107  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:20.978918  141927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:21.214688  141927 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:22.539778  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:24.543547  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:23.600037  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:23.616539  142411 kubeadm.go:591] duration metric: took 4m4.142686832s to restartPrimaryControlPlane
	W0420 01:30:23.616641  142411 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:23.616676  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:25.481285  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.864573977s)
	I0420 01:30:25.481385  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:25.500950  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:25.518624  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:25.532506  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:25.532531  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:25.532584  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:25.546634  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:25.546708  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:25.561379  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:25.575506  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:25.575627  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:25.590615  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.604855  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:25.604923  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.619717  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:25.634525  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:25.634607  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:25.649408  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:25.735636  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:30:25.735697  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:25.913199  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:25.913347  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:25.913483  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:26.120240  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:26.122066  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:26.122169  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:26.122279  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:26.122395  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:26.122499  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:26.122623  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:26.122715  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:26.122806  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:26.122898  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:26.122999  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:26.123113  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:26.123173  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:26.123244  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:26.243908  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:26.354349  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:26.605778  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:26.833914  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:26.855348  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:26.857029  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:26.857250  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:27.010707  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:27.012314  142411 out.go:204]   - Booting up control plane ...
	I0420 01:30:27.012456  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:27.036284  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:27.049123  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:27.050561  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:27.053222  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:30:30.213456  141927 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:30.213557  141927 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:30.213687  141927 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:30.213826  141927 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:30.213915  141927 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:30.213978  141927 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:30.215501  141927 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:30.215594  141927 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:30.215667  141927 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:30.215802  141927 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:30.215886  141927 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:30.215960  141927 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:30.216018  141927 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:30.216097  141927 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:30.216156  141927 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:30.216258  141927 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:30.216350  141927 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:30.216385  141927 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:30.216447  141927 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:30.216517  141927 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:30.216589  141927 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:30.216653  141927 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:30.216743  141927 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:30.216832  141927 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:30.216933  141927 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:30.217019  141927 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:30.218228  141927 out.go:204]   - Booting up control plane ...
	I0420 01:30:30.218341  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:30.218446  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:30.218516  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:30.218615  141927 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:30.218703  141927 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:30.218753  141927 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:30.218904  141927 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:30.218975  141927 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:30.219027  141927 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001925972s
	I0420 01:30:30.219128  141927 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:30.219216  141927 kubeadm.go:309] [api-check] The API server is healthy after 5.502367015s
	I0420 01:30:30.219336  141927 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:30.219504  141927 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:30.219576  141927 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:30.219816  141927 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-907988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:30.219880  141927 kubeadm.go:309] [bootstrap-token] Using token: ozlrl4.y5r3psi4bnl35gso
	I0420 01:30:30.221283  141927 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:30.221416  141927 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:30.221533  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:30.221728  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:30.221968  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:30.222146  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:30.222255  141927 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:30.222385  141927 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:30.222455  141927 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:30.222524  141927 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:30.222534  141927 kubeadm.go:309] 
	I0420 01:30:30.222614  141927 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:30.222628  141927 kubeadm.go:309] 
	I0420 01:30:30.222692  141927 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:30.222699  141927 kubeadm.go:309] 
	I0420 01:30:30.222723  141927 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:30.222772  141927 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:30.222815  141927 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:30.222821  141927 kubeadm.go:309] 
	I0420 01:30:30.222878  141927 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:30.222885  141927 kubeadm.go:309] 
	I0420 01:30:30.222923  141927 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:30.222929  141927 kubeadm.go:309] 
	I0420 01:30:30.222994  141927 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:30.223100  141927 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:30.223171  141927 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:30.223189  141927 kubeadm.go:309] 
	I0420 01:30:30.223281  141927 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:30.223346  141927 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:30.223354  141927 kubeadm.go:309] 
	I0420 01:30:30.223423  141927 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223527  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:30.223552  141927 kubeadm.go:309] 	--control-plane 
	I0420 01:30:30.223559  141927 kubeadm.go:309] 
	I0420 01:30:30.223627  141927 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:30.223635  141927 kubeadm.go:309] 
	I0420 01:30:30.223704  141927 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223811  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:30.223826  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:30:30.223833  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:30.225184  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:27.041383  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:29.540967  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:30.226237  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:30.241388  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:30:30.274356  141927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:30.274469  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:30.274503  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-907988 minikube.k8s.io/updated_at=2024_04_20T01_30_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=default-k8s-diff-port-907988 minikube.k8s.io/primary=true
	I0420 01:30:30.319402  141927 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:30.505362  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.006101  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.505679  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.005947  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.505747  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.005919  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.505449  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:34.006029  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.040710  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.541175  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.505846  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.006187  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.505618  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.005994  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.506217  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.006428  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.506359  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.006018  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.505454  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:39.006426  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.041157  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.542266  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.506227  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.005941  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.506123  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.006198  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.506244  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.006045  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.505458  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.006082  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.122481  141927 kubeadm.go:1107] duration metric: took 12.84807935s to wait for elevateKubeSystemPrivileges
	W0420 01:30:43.122525  141927 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:30:43.122535  141927 kubeadm.go:393] duration metric: took 5m11.83456536s to StartCluster
	I0420 01:30:43.122559  141927 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.122689  141927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:30:43.124746  141927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.125059  141927 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:30:43.126572  141927 out.go:177] * Verifying Kubernetes components...
	I0420 01:30:43.125129  141927 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:30:43.125301  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:30:43.128187  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:30:43.128231  141927 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128240  141927 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128277  141927 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-907988"
	I0420 01:30:43.128278  141927 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-907988"
	W0420 01:30:43.128288  141927 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:30:43.128302  141927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-907988"
	I0420 01:30:43.128352  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.128769  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128795  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128840  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128800  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128306  141927 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.128994  141927 addons.go:243] addon metrics-server should already be in state true
	I0420 01:30:43.129026  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.129378  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.129401  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.148251  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0420 01:30:43.148272  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0420 01:30:43.148503  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0420 01:30:43.148959  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.148985  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149060  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149605  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149626  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149683  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149688  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149698  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149706  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.150105  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150108  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150106  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150358  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.150703  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150733  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.150760  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150798  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.154242  141927 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.154266  141927 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:30:43.154300  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.154673  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.154715  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.167283  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0420 01:30:43.167925  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.168475  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.168496  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.168868  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.169094  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.171067  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0420 01:30:43.171384  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.173102  141927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:30:43.171760  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.172823  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0420 01:30:43.174639  141927 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.174661  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:30:43.174681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.174859  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.175307  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175331  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175460  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175476  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175799  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.175992  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.176361  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.176376  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.176686  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.178744  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.178848  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.180048  141927 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:30:43.179462  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.181257  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:30:43.181275  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:30:43.181289  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.181296  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.179641  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.182168  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.182437  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.182627  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.184562  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.184958  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.184985  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.185241  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.185430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.185621  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.185771  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.195778  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0420 01:30:43.196419  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.196979  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.197002  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.197763  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.198072  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.200177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.200480  141927 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.200497  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:30:43.200516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.204078  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.204128  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204154  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.204178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204275  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.204456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.204582  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.375731  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:30:43.424911  141927 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436729  141927 node_ready.go:49] node "default-k8s-diff-port-907988" has status "Ready":"True"
	I0420 01:30:43.436750  141927 node_ready.go:38] duration metric: took 11.810027ms for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436759  141927 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:43.445452  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:43.497224  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.526236  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.527573  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:30:43.527597  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:30:43.591844  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:30:43.591872  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:30:43.655692  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:43.655721  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:30:43.824523  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:44.808651  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.311370016s)
	I0420 01:30:44.808721  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808724  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.282444767s)
	I0420 01:30:44.808735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.808767  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808783  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809052  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809066  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809074  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809081  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809144  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809162  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809170  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809626  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809635  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809647  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809655  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:44.833935  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.833963  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.834326  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.834348  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316084  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.491512905s)
	I0420 01:30:45.316157  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316514  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316539  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316593  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316610  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316910  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316989  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.317007  141927 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-907988"
	I0420 01:30:45.316906  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:45.319289  141927 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
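	(Illustrative aside, not part of the captured log: the lines above apply the metrics-server manifests and then report "Verifying addon metrics-server=true". A rough Go sketch of that kind of verification, assuming client-go is available, the conventional "metrics-server" deployment name, and the kubeconfig path seen in the log -- all assumptions for illustration, not minikube's own code -- could poll the deployment like this:)

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the log above; an assumption for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the metrics-server deployment in kube-system reports an available replica.
		for {
			dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
			if err == nil && dep.Status.AvailableReplicas > 0 {
				fmt.Println("metrics-server deployment is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}

	(The check minikube actually performs behind that log line may differ; this is only a sketch of the idea.)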
	I0420 01:30:42.040865  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:44.042663  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.320468  141927 addons.go:505] duration metric: took 2.195343987s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:30:45.453717  141927 pod_ready.go:102] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.952010  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.952032  141927 pod_ready.go:81] duration metric: took 2.506556645s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.952040  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957512  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.957533  141927 pod_ready.go:81] duration metric: took 5.486362ms for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957541  141927 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962790  141927 pod_ready.go:92] pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.962810  141927 pod_ready.go:81] duration metric: took 5.261485ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962821  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968720  141927 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.968743  141927 pod_ready.go:81] duration metric: took 5.914425ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968754  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976930  141927 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.976946  141927 pod_ready.go:81] duration metric: took 8.183898ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976954  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350179  141927 pod_ready.go:92] pod "kube-proxy-jt8wr" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.350203  141927 pod_ready.go:81] duration metric: took 373.241134ms for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350212  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749542  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.749566  141927 pod_ready.go:81] duration metric: took 399.34726ms for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749573  141927 pod_ready.go:38] duration metric: took 3.312805349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:46.749587  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:30:46.749647  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:46.785318  141927 api_server.go:72] duration metric: took 3.660207577s to wait for apiserver process to appear ...
	I0420 01:30:46.785349  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:30:46.785373  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:30:46.793933  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:30:46.794890  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:30:46.794911  141927 api_server.go:131] duration metric: took 9.555146ms to wait for apiserver health ...
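	(Aside: the healthz check logged above is an HTTPS GET against the apiserver endpoint shown in the log. A minimal Go sketch of the same probe -- TLS verification skipped purely for illustration; the real client in api_server.go presumably authenticates with the cluster credentials instead:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Endpoint copied from the log above. Skipping TLS verification is only acceptable
		// for an illustrative probe; a real client would trust the cluster CA.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.39.222:8444/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}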
	I0420 01:30:46.794920  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:30:46.953036  141927 system_pods.go:59] 9 kube-system pods found
	I0420 01:30:46.953066  141927 system_pods.go:61] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:46.953070  141927 system_pods.go:61] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:46.953074  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:46.953077  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:46.953081  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:46.953085  141927 system_pods.go:61] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:46.953088  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:46.953094  141927 system_pods.go:61] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:46.953098  141927 system_pods.go:61] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:46.953108  141927 system_pods.go:74] duration metric: took 158.182751ms to wait for pod list to return data ...
	I0420 01:30:46.953116  141927 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:30:47.151205  141927 default_sa.go:45] found service account: "default"
	I0420 01:30:47.151245  141927 default_sa.go:55] duration metric: took 198.121475ms for default service account to be created ...
	I0420 01:30:47.151274  141927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:30:47.354321  141927 system_pods.go:86] 9 kube-system pods found
	I0420 01:30:47.354348  141927 system_pods.go:89] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:47.354353  141927 system_pods.go:89] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:47.354358  141927 system_pods.go:89] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:47.354364  141927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:47.354369  141927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:47.354373  141927 system_pods.go:89] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:47.354376  141927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:47.354383  141927 system_pods.go:89] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:47.354387  141927 system_pods.go:89] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:47.354395  141927 system_pods.go:126] duration metric: took 203.115923ms to wait for k8s-apps to be running ...
	I0420 01:30:47.354403  141927 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:30:47.354452  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.370946  141927 system_svc.go:56] duration metric: took 16.532953ms WaitForService to wait for kubelet
	I0420 01:30:47.370977  141927 kubeadm.go:576] duration metric: took 4.245884115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:30:47.370997  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:30:47.550097  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:30:47.550127  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:30:47.550138  141927 node_conditions.go:105] duration metric: took 179.136105ms to run NodePressure ...
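	(Aside: the NodePressure lines above report the node's ephemeral-storage and CPU capacity. A small client-go sketch that surfaces the same capacity figures -- the kubeconfig path is an assumption for illustration, and this is not the code behind node_conditions.go:)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		// Print the capacity values the log compares against its minimums.
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}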
	I0420 01:30:47.550150  141927 start.go:240] waiting for startup goroutines ...
	I0420 01:30:47.550156  141927 start.go:245] waiting for cluster config update ...
	I0420 01:30:47.550167  141927 start.go:254] writing updated cluster config ...
	I0420 01:30:47.550493  141927 ssh_runner.go:195] Run: rm -f paused
	I0420 01:30:47.614715  141927 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:30:47.616658  141927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-907988" cluster and "default" namespace by default
	I0420 01:30:47.623645  142057 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.736926697s)
	I0420 01:30:47.623716  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.648132  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:47.662521  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:47.674241  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:47.674265  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:47.674311  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:47.684981  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:47.685037  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:47.696549  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:47.706838  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:47.706885  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:47.717387  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.732194  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:47.732252  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.743425  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:47.756579  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:47.756629  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
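	(Aside: the grep-then-remove sequence above checks each leftover kubeconfig for the expected control-plane endpoint and deletes it when the reference is missing, so the following kubeadm init can regenerate it. A condensed Go sketch of that idea, with simplified error handling and assuming passwordless sudo as on the test VM; not the actual logic in kubeadm.go:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Endpoint and file list copied from the log above.
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// Keep the file only if it already references the expected control plane;
			// otherwise remove it so `kubeadm init` regenerates it.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "%s missing %s reference, removing\n", f, endpoint)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}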
	I0420 01:30:47.769210  142057 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:47.832909  142057 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:47.832972  142057 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:47.987090  142057 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:47.987209  142057 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:47.987380  142057 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:48.253287  142057 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:48.255451  142057 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:48.255552  142057 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:48.255657  142057 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:48.255767  142057 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:48.255880  142057 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:48.255992  142057 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:48.256076  142057 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:48.256170  142057 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:48.256250  142057 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:48.256344  142057 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:48.256445  142057 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:48.256500  142057 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:48.256563  142057 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:48.346357  142057 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:48.602240  142057 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:48.741597  142057 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:49.086311  142057 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:49.284340  142057 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:49.284671  142057 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:49.287663  142057 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:46.540199  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:48.540848  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:50.541579  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:49.289305  142057 out.go:204]   - Booting up control plane ...
	I0420 01:30:49.289430  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:49.289558  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:49.289646  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:49.309520  142057 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:49.311328  142057 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:49.311389  142057 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:49.448766  142057 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:49.448889  142057 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:49.950225  142057 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.460713ms
	I0420 01:30:49.950316  142057 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:55.452587  142057 kubeadm.go:309] [api-check] The API server is healthy after 5.502061843s
	I0420 01:30:55.466768  142057 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:55.500892  142057 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:55.538376  142057 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:55.538631  142057 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-269507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:55.559344  142057 kubeadm.go:309] [bootstrap-token] Using token: jtn2hn.nnhc9vssv65463xy
	I0420 01:30:52.542748  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.560872  142057 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:55.561022  142057 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:55.575617  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:55.583307  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:55.586398  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:55.596138  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:55.599717  142057 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:55.861367  142057 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:56.310991  142057 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:56.860904  142057 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:56.860939  142057 kubeadm.go:309] 
	I0420 01:30:56.861051  142057 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:56.861077  142057 kubeadm.go:309] 
	I0420 01:30:56.861180  142057 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:56.861201  142057 kubeadm.go:309] 
	I0420 01:30:56.861232  142057 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:56.861345  142057 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:56.861438  142057 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:56.861454  142057 kubeadm.go:309] 
	I0420 01:30:56.861534  142057 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:56.861544  142057 kubeadm.go:309] 
	I0420 01:30:56.861628  142057 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:56.861644  142057 kubeadm.go:309] 
	I0420 01:30:56.861728  142057 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:56.861822  142057 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:56.861895  142057 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:56.861923  142057 kubeadm.go:309] 
	I0420 01:30:56.862120  142057 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:56.862228  142057 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:56.862246  142057 kubeadm.go:309] 
	I0420 01:30:56.862371  142057 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862532  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:56.862571  142057 kubeadm.go:309] 	--control-plane 
	I0420 01:30:56.862580  142057 kubeadm.go:309] 
	I0420 01:30:56.862700  142057 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:56.862724  142057 kubeadm.go:309] 
	I0420 01:30:56.862827  142057 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862955  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:56.863259  142057 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:56.863343  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:30:56.863358  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:56.865193  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:57.541555  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:00.040222  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:56.866515  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:56.880013  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
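	(Aside: the bridge CNI step above only shows a 496-byte 1-k8s.conflist being copied to /etc/cni/net.d; its contents are not in the log. Purely as a sketch of what a minimal bridge conflist of that shape could look like -- field names follow the standard CNI "bridge" and "portmap" plugin schemas, and all values are placeholders, not minikube's actual file:)

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assembles and prints a minimal bridge CNI config list; values are placeholders.
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"addIf":            "true",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		data, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(data))
	}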
	I0420 01:30:56.900677  142057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:56.900773  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:56.900809  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-269507 minikube.k8s.io/updated_at=2024_04_20T01_30_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=embed-certs-269507 minikube.k8s.io/primary=true
	I0420 01:30:56.942362  142057 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:57.124807  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:57.625201  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.125867  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.625845  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.124923  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.625004  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.125467  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.625081  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:01.125446  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.539751  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:04.540090  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:01.625279  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.125084  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.625048  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.125567  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.625428  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.125592  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.625874  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.125031  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.625698  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:06.125620  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.054009  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:31:07.054375  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:07.054708  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:06.625682  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.125909  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.625563  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.125451  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.625265  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.125677  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.625433  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.720318  142057 kubeadm.go:1107] duration metric: took 12.81961115s to wait for elevateKubeSystemPrivileges
	W0420 01:31:09.720362  142057 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:31:09.720373  142057 kubeadm.go:393] duration metric: took 5m17.067399347s to StartCluster
	I0420 01:31:09.720426  142057 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.720552  142057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:31:09.722646  142057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.722904  142057 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:31:09.724771  142057 out.go:177] * Verifying Kubernetes components...
	I0420 01:31:09.722979  142057 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:31:09.723175  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:31:09.724863  142057 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-269507"
	I0420 01:31:09.726208  142057 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-269507"
	W0420 01:31:09.726229  142057 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:31:09.724870  142057 addons.go:69] Setting default-storageclass=true in profile "embed-certs-269507"
	I0420 01:31:09.726270  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726289  142057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-269507"
	I0420 01:31:09.724889  142057 addons.go:69] Setting metrics-server=true in profile "embed-certs-269507"
	I0420 01:31:09.726351  142057 addons.go:234] Setting addon metrics-server=true in "embed-certs-269507"
	W0420 01:31:09.726365  142057 addons.go:243] addon metrics-server should already be in state true
	I0420 01:31:09.726395  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726159  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:31:09.726699  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726737  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726771  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726785  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726803  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726793  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.742932  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41221
	I0420 01:31:09.743143  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0420 01:31:09.743375  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743666  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743951  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.743968  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744102  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.744120  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744439  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.744497  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.745152  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745162  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745178  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745195  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745923  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0420 01:31:09.746441  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.747173  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.747202  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.747637  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.747934  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.751736  142057 addons.go:234] Setting addon default-storageclass=true in "embed-certs-269507"
	W0420 01:31:09.751760  142057 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:31:09.751791  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.752174  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.752199  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.763296  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I0420 01:31:09.763475  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0420 01:31:09.764103  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764119  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764635  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764656  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.764807  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764821  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.765353  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765369  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765562  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.766352  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.767675  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.769455  142057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:31:09.768866  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.770529  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:31:09.770596  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:31:09.770618  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.771959  142057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:31:07.039635  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.040381  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.772109  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0420 01:31:09.773531  142057 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:09.773545  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:31:09.773560  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.773989  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.774697  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.774711  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.774889  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775069  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.775522  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.775550  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.775770  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.775840  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.775855  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775973  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.776144  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.776283  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.776967  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777306  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.777376  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777621  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.777811  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.777949  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.778092  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.791609  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0420 01:31:09.792008  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.792475  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.792492  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.792811  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.793110  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.794743  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.795008  142057 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:09.795023  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:31:09.795037  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.797655  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798120  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.798144  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798394  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.798603  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.798745  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.798888  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.957088  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:31:10.012344  142057 node_ready.go:35] waiting up to 6m0s for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023887  142057 node_ready.go:49] node "embed-certs-269507" has status "Ready":"True"
	I0420 01:31:10.023917  142057 node_ready.go:38] duration metric: took 11.536403ms for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023929  142057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:10.035096  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:10.210022  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:10.222715  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:10.251807  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:31:10.251836  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:31:10.342638  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:31:10.342664  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:31:10.480676  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:10.480700  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:31:10.655186  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:11.331066  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121005107s)
	I0420 01:31:11.331125  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.108375538s)
	I0420 01:31:11.331139  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331152  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331165  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331181  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331530  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331601  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331611  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331641  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331664  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331681  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331684  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331692  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331699  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331646  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331932  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331959  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331979  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331991  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331989  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.332003  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.364269  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.364296  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.364641  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.364667  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.364671  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809229  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.154002194s)
	I0420 01:31:11.809282  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809618  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.809676  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809688  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809705  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809717  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809954  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809983  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.810001  142057 addons.go:470] Verifying addon metrics-server=true in "embed-certs-269507"
	I0420 01:31:11.810004  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.811610  142057 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:31:12.055506  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:12.055793  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:11.813049  142057 addons.go:505] duration metric: took 2.090078148s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:31:12.044618  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:12.565519  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.565543  142057 pod_ready.go:81] duration metric: took 2.530392572s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.565552  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.577986  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.578011  142057 pod_ready.go:81] duration metric: took 12.452506ms for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.578020  142057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595104  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.595129  142057 pod_ready.go:81] duration metric: took 17.103577ms for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595139  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602502  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.602524  142057 pod_ready.go:81] duration metric: took 7.377832ms for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602538  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608443  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.608462  142057 pod_ready.go:81] duration metric: took 5.916781ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608471  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939418  142057 pod_ready.go:92] pod "kube-proxy-4x66x" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.939444  142057 pod_ready.go:81] duration metric: took 330.966964ms for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939454  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341528  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:13.341556  142057 pod_ready.go:81] duration metric: took 402.093841ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341565  142057 pod_ready.go:38] duration metric: took 3.317622631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:13.341583  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:31:13.341648  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:31:13.361938  142057 api_server.go:72] duration metric: took 3.638999445s to wait for apiserver process to appear ...
	I0420 01:31:13.361967  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:31:13.361987  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:31:13.367149  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:31:13.368215  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:31:13.368243  142057 api_server.go:131] duration metric: took 6.268859ms to wait for apiserver health ...
	I0420 01:31:13.368254  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:31:13.545177  142057 system_pods.go:59] 9 kube-system pods found
	I0420 01:31:13.545203  142057 system_pods.go:61] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.545207  142057 system_pods.go:61] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.545212  142057 system_pods.go:61] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.545215  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.545219  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.545222  142057 system_pods.go:61] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.545224  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.545230  142057 system_pods.go:61] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.545233  142057 system_pods.go:61] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.545242  142057 system_pods.go:74] duration metric: took 176.980813ms to wait for pod list to return data ...
	I0420 01:31:13.545249  142057 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:31:13.739865  142057 default_sa.go:45] found service account: "default"
	I0420 01:31:13.739892  142057 default_sa.go:55] duration metric: took 194.636223ms for default service account to be created ...
	I0420 01:31:13.739903  142057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:31:13.942758  142057 system_pods.go:86] 9 kube-system pods found
	I0420 01:31:13.942785  142057 system_pods.go:89] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.942793  142057 system_pods.go:89] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.942801  142057 system_pods.go:89] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.942812  142057 system_pods.go:89] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.942819  142057 system_pods.go:89] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.942829  142057 system_pods.go:89] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.942835  142057 system_pods.go:89] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.942846  142057 system_pods.go:89] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.942854  142057 system_pods.go:89] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.942863  142057 system_pods.go:126] duration metric: took 202.954629ms to wait for k8s-apps to be running ...
	I0420 01:31:13.942873  142057 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:31:13.942926  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:31:13.962754  142057 system_svc.go:56] duration metric: took 19.872903ms WaitForService to wait for kubelet
	I0420 01:31:13.962781  142057 kubeadm.go:576] duration metric: took 4.239850872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:31:13.962802  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:31:14.139800  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:31:14.139834  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:31:14.139848  142057 node_conditions.go:105] duration metric: took 177.041675ms to run NodePressure ...
	I0420 01:31:14.139862  142057 start.go:240] waiting for startup goroutines ...
	I0420 01:31:14.139872  142057 start.go:245] waiting for cluster config update ...
	I0420 01:31:14.139886  142057 start.go:254] writing updated cluster config ...
	I0420 01:31:14.140201  142057 ssh_runner.go:195] Run: rm -f paused
	I0420 01:31:14.190985  142057 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:31:14.193207  142057 out.go:177] * Done! kubectl is now configured to use "embed-certs-269507" cluster and "default" namespace by default
	I0420 01:31:11.040724  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:13.043491  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:15.540182  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:17.540894  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:19.541858  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:22.056094  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:22.056315  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:22.039484  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:24.043137  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:26.043262  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:28.540379  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:30.540568  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:32.543371  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:35.040187  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:37.541354  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:40.039779  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:42.057024  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:42.057278  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:42.040147  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:44.540170  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:46.540576  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:48.543604  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:51.034230  141746 pod_ready.go:81] duration metric: took 4m0.001077028s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	E0420 01:31:51.034258  141746 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:31:51.034280  141746 pod_ready.go:38] duration metric: took 4m12.046687249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:51.034308  141746 kubeadm.go:591] duration metric: took 4m55.947094434s to restartPrimaryControlPlane
	W0420 01:31:51.034367  141746 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:31:51.034400  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:22.058965  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:32:22.059213  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:32:22.059231  142411 kubeadm.go:309] 
	I0420 01:32:22.059284  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:32:22.059341  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:32:22.059351  142411 kubeadm.go:309] 
	I0420 01:32:22.059398  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:32:22.059449  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:32:22.059581  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:32:22.059606  142411 kubeadm.go:309] 
	I0420 01:32:22.059693  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:32:22.059725  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:32:22.059796  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:32:22.059821  142411 kubeadm.go:309] 
	I0420 01:32:22.059916  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:32:22.060046  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:32:22.060068  142411 kubeadm.go:309] 
	I0420 01:32:22.060225  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:32:22.060371  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:32:22.060498  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:32:22.060624  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:32:22.060643  142411 kubeadm.go:309] 
	I0420 01:32:22.061155  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:22.061294  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:32:22.061403  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0420 01:32:22.061569  142411 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0420 01:32:22.061628  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:23.211059  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.149398853s)
	I0420 01:32:23.211147  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.228140  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.240832  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.240868  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.240912  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.252674  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.252735  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.264128  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.274998  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.275059  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.286449  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.297377  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.297452  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.308971  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.320775  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.320842  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.333601  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.490252  141746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.455825605s)
	I0420 01:32:23.490330  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.515027  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:32:23.528835  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.542901  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.542927  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.542969  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.554931  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.555006  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.570665  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.583505  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.583576  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.595835  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.607468  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.607538  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.620629  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.634141  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.634222  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.648360  141746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.727697  141746 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:32:23.727825  141746 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:32:23.899280  141746 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:32:23.899376  141746 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:32:23.899456  141746 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:32:24.139299  141746 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:32:24.141410  141746 out.go:204]   - Generating certificates and keys ...
	I0420 01:32:24.141522  141746 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:32:24.141618  141746 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:32:24.141719  141746 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:32:24.141814  141746 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:32:24.141912  141746 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:32:24.141987  141746 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:32:24.142076  141746 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:32:24.142172  141746 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:32:24.142348  141746 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:32:24.142589  141746 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:32:24.142757  141746 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:32:24.142990  141746 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:32:24.247270  141746 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:32:24.326535  141746 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:32:24.538489  141746 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:32:24.594810  141746 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:32:24.712812  141746 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:32:24.713304  141746 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:32:24.719376  141746 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:32:24.721510  141746 out.go:204]   - Booting up control plane ...
	I0420 01:32:24.721649  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:32:24.721781  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:32:24.722470  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:32:24.748410  141746 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:32:24.750247  141746 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:32:24.750320  141746 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:32:24.906734  141746 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:32:24.906859  141746 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:32:25.409625  141746 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.844847ms
	I0420 01:32:25.409771  141746 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:32:23.603058  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:30.912062  141746 kubeadm.go:309] [api-check] The API server is healthy after 5.502434175s
	I0420 01:32:30.935231  141746 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:32:30.954860  141746 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:32:30.990255  141746 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:32:30.990480  141746 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-338118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:32:31.004218  141746 kubeadm.go:309] [bootstrap-token] Using token: 6ub3et.0wyu42zodual4kt8
	I0420 01:32:31.005771  141746 out.go:204]   - Configuring RBAC rules ...
	I0420 01:32:31.005875  141746 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:32:31.011978  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:32:31.020750  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:32:31.024958  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:32:31.032499  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:32:31.037128  141746 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:32:31.320324  141746 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:32:31.761773  141746 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:32:32.322540  141746 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:32:32.322563  141746 kubeadm.go:309] 
	I0420 01:32:32.322633  141746 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:32:32.322648  141746 kubeadm.go:309] 
	I0420 01:32:32.322728  141746 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:32:32.322737  141746 kubeadm.go:309] 
	I0420 01:32:32.322763  141746 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:32:32.322833  141746 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:32:32.322906  141746 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:32:32.322918  141746 kubeadm.go:309] 
	I0420 01:32:32.323005  141746 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:32:32.323015  141746 kubeadm.go:309] 
	I0420 01:32:32.323083  141746 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:32:32.323110  141746 kubeadm.go:309] 
	I0420 01:32:32.323184  141746 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:32:32.323304  141746 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:32:32.323362  141746 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:32:32.323372  141746 kubeadm.go:309] 
	I0420 01:32:32.323522  141746 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:32:32.323660  141746 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:32:32.323677  141746 kubeadm.go:309] 
	I0420 01:32:32.323765  141746 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.323916  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:32:32.323948  141746 kubeadm.go:309] 	--control-plane 
	I0420 01:32:32.323957  141746 kubeadm.go:309] 
	I0420 01:32:32.324035  141746 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:32:32.324049  141746 kubeadm.go:309] 
	I0420 01:32:32.324201  141746 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.324348  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:32:32.324967  141746 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:32.325210  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:32:32.325228  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:32:32.327624  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:32:32.329029  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:32:32.344181  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:32:32.368978  141746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:32:32.369052  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:32.369086  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-338118 minikube.k8s.io/updated_at=2024_04_20T01_32_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=no-preload-338118 minikube.k8s.io/primary=true
	I0420 01:32:32.579160  141746 ops.go:34] apiserver oom_adj: -16
	I0420 01:32:32.579218  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.079458  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.579498  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.079957  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.579520  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.079902  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.579955  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.079525  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.579612  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.079831  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.579989  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.079481  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.579798  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.080239  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.579654  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.080267  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.579837  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.079840  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.579347  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.079368  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.579641  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.079257  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.579647  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.079317  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.580002  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.698993  141746 kubeadm.go:1107] duration metric: took 12.330007154s to wait for elevateKubeSystemPrivileges
	W0420 01:32:44.699036  141746 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:32:44.699045  141746 kubeadm.go:393] duration metric: took 5m49.674421659s to StartCluster
	I0420 01:32:44.699064  141746 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.699166  141746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:32:44.700731  141746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.700982  141746 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:32:44.702752  141746 out.go:177] * Verifying Kubernetes components...
	I0420 01:32:44.701040  141746 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:32:44.701201  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:32:44.704065  141746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-338118"
	I0420 01:32:44.704078  141746 addons.go:69] Setting metrics-server=true in profile "no-preload-338118"
	I0420 01:32:44.704077  141746 addons.go:69] Setting default-storageclass=true in profile "no-preload-338118"
	I0420 01:32:44.704099  141746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-338118"
	W0420 01:32:44.704105  141746 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:32:44.704114  141746 addons.go:234] Setting addon metrics-server=true in "no-preload-338118"
	I0420 01:32:44.704113  141746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-338118"
	W0420 01:32:44.704124  141746 addons.go:243] addon metrics-server should already be in state true
	I0420 01:32:44.704151  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704157  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704069  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:32:44.704452  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704485  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704503  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704521  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704535  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704545  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.720663  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0420 01:32:44.720685  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0420 01:32:44.721210  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721222  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721746  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721766  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.721901  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721925  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.722282  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722311  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722889  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.722914  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.723194  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0420 01:32:44.723775  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.724401  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.724427  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.724790  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.724975  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.728728  141746 addons.go:234] Setting addon default-storageclass=true in "no-preload-338118"
	W0420 01:32:44.728751  141746 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:32:44.728780  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.729136  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.729161  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.738505  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0420 01:32:44.738893  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.739388  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.739409  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.739916  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.740120  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.741929  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0420 01:32:44.742090  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.744131  141746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:32:44.742538  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.745561  141746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:44.745579  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:32:44.745597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.744662  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.745640  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.745994  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.746345  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.747491  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0420 01:32:44.747878  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.748594  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.748731  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.748752  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.750445  141746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:32:44.749050  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.749380  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.749990  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.752010  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:32:44.752029  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:32:44.752046  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.752131  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.752155  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.752307  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.752479  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.752647  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.752676  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.752676  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.754727  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755188  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.755216  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755497  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.755696  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.755866  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.756034  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.768442  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0420 01:32:44.768887  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.769453  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.769473  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.769852  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.770359  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.772155  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.772443  141746 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:44.772651  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:32:44.772686  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.775775  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776177  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.776205  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776313  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.776492  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.776667  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.776832  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.930301  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:32:44.948472  141746 node_ready.go:35] waiting up to 6m0s for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960637  141746 node_ready.go:49] node "no-preload-338118" has status "Ready":"True"
	I0420 01:32:44.960664  141746 node_ready.go:38] duration metric: took 12.15407ms for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960676  141746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:44.971143  141746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980894  141746 pod_ready.go:92] pod "etcd-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.980917  141746 pod_ready.go:81] duration metric: took 9.749994ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980929  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995192  141746 pod_ready.go:92] pod "kube-apiserver-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.995217  141746 pod_ready.go:81] duration metric: took 14.279681ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995229  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004302  141746 pod_ready.go:92] pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:45.004324  141746 pod_ready.go:81] duration metric: took 9.086713ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004338  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.062482  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:45.066314  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:32:45.066334  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:32:45.093830  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:45.148558  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:32:45.148600  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:32:45.235321  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:45.235349  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:32:45.275661  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:46.686292  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592425062s)
	I0420 01:32:46.686344  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.623774979s)
	I0420 01:32:46.686360  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686375  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686385  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686401  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686822  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.686897  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.686911  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686920  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686835  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.686839  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687001  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687013  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.687027  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686850  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.687153  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687166  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687359  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687373  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.697988  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.422274698s)
	I0420 01:32:46.698045  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698059  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698320  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698339  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698351  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698359  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698568  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.698658  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698676  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698687  141746 addons.go:470] Verifying addon metrics-server=true in "no-preload-338118"
	I0420 01:32:46.733170  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.733198  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.733551  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.733573  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.733605  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.735297  141746 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0420 01:32:46.736665  141746 addons.go:505] duration metric: took 2.035625149s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0420 01:32:47.011271  141746 pod_ready.go:92] pod "kube-proxy-f57d9" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.011299  141746 pod_ready.go:81] duration metric: took 2.006954798s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.011309  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025378  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.025408  141746 pod_ready.go:81] duration metric: took 14.090474ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025421  141746 pod_ready.go:38] duration metric: took 2.064731781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:47.025443  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:32:47.025511  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:32:47.052680  141746 api_server.go:72] duration metric: took 2.351656586s to wait for apiserver process to appear ...
	I0420 01:32:47.052712  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:32:47.052738  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:32:47.061908  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:32:47.065615  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:32:47.065641  141746 api_server.go:131] duration metric: took 12.920384ms to wait for apiserver health ...
	I0420 01:32:47.065651  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:32:47.158039  141746 system_pods.go:59] 9 kube-system pods found
	I0420 01:32:47.158076  141746 system_pods.go:61] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158087  141746 system_pods.go:61] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158096  141746 system_pods.go:61] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.158101  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.158107  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.158113  141746 system_pods.go:61] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.158117  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.158126  141746 system_pods.go:61] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.158134  141746 system_pods.go:61] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.158147  141746 system_pods.go:74] duration metric: took 92.489697ms to wait for pod list to return data ...
	I0420 01:32:47.158162  141746 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:32:47.351962  141746 default_sa.go:45] found service account: "default"
	I0420 01:32:47.352002  141746 default_sa.go:55] duration metric: took 193.830142ms for default service account to be created ...
	I0420 01:32:47.352016  141746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:32:47.557471  141746 system_pods.go:86] 9 kube-system pods found
	I0420 01:32:47.557511  141746 system_pods.go:89] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557524  141746 system_pods.go:89] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557534  141746 system_pods.go:89] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.557540  141746 system_pods.go:89] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.557547  141746 system_pods.go:89] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.557554  141746 system_pods.go:89] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.557564  141746 system_pods.go:89] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.557577  141746 system_pods.go:89] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.557589  141746 system_pods.go:89] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.557602  141746 system_pods.go:126] duration metric: took 205.577946ms to wait for k8s-apps to be running ...
	I0420 01:32:47.557615  141746 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:32:47.557674  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:47.577745  141746 system_svc.go:56] duration metric: took 20.111982ms WaitForService to wait for kubelet
	I0420 01:32:47.577774  141746 kubeadm.go:576] duration metric: took 2.876759476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:32:47.577794  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:32:47.753216  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:32:47.753246  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:32:47.753257  141746 node_conditions.go:105] duration metric: took 175.457668ms to run NodePressure ...
	I0420 01:32:47.753269  141746 start.go:240] waiting for startup goroutines ...
	I0420 01:32:47.753275  141746 start.go:245] waiting for cluster config update ...
	I0420 01:32:47.753286  141746 start.go:254] writing updated cluster config ...
	I0420 01:32:47.753612  141746 ssh_runner.go:195] Run: rm -f paused
	I0420 01:32:47.804681  141746 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:32:47.806823  141746 out.go:177] * Done! kubectl is now configured to use "no-preload-338118" cluster and "default" namespace by default
	I0420 01:34:20.028550  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:34:20.028769  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0420 01:34:20.030749  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:34:20.030826  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:34:20.030947  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:34:20.031078  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:34:20.031217  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:34:20.031319  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:34:20.032927  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:34:20.033024  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:34:20.033110  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:34:20.033211  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:34:20.033286  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:34:20.033410  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:34:20.033496  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:34:20.033597  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:34:20.033695  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:34:20.033805  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:34:20.033921  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:34:20.033972  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:34:20.034042  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:34:20.034125  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:34:20.034200  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:34:20.034287  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:34:20.034355  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:34:20.034510  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:34:20.034614  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:34:20.034680  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:34:20.034760  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:34:20.036300  142411 out.go:204]   - Booting up control plane ...
	I0420 01:34:20.036380  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:34:20.036479  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:34:20.036583  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:34:20.036705  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:34:20.036888  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:34:20.036955  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:34:20.037046  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037228  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037291  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037494  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037576  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037730  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037789  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037977  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038044  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.038262  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038284  142411 kubeadm.go:309] 
	I0420 01:34:20.038341  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:34:20.038382  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:34:20.038396  142411 kubeadm.go:309] 
	I0420 01:34:20.038443  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:34:20.038476  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:34:20.038612  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:34:20.038625  142411 kubeadm.go:309] 
	I0420 01:34:20.038735  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:34:20.038767  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:34:20.038794  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:34:20.038808  142411 kubeadm.go:309] 
	I0420 01:34:20.038902  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:34:20.038977  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:34:20.038987  142411 kubeadm.go:309] 
	I0420 01:34:20.039101  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:34:20.039203  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:34:20.039274  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:34:20.039342  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:34:20.039384  142411 kubeadm.go:309] 
	I0420 01:34:20.039417  142411 kubeadm.go:393] duration metric: took 8m0.622979268s to StartCluster
	I0420 01:34:20.039459  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:34:20.039514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:34:20.090236  142411 cri.go:89] found id: ""
	I0420 01:34:20.090262  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.090270  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:34:20.090276  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:34:20.090331  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:34:20.133841  142411 cri.go:89] found id: ""
	I0420 01:34:20.133867  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.133875  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:34:20.133883  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:34:20.133955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:34:20.176186  142411 cri.go:89] found id: ""
	I0420 01:34:20.176219  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.176230  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:34:20.176235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:34:20.176295  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:34:20.214895  142411 cri.go:89] found id: ""
	I0420 01:34:20.214932  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.214944  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:34:20.214951  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:34:20.215018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:34:20.257759  142411 cri.go:89] found id: ""
	I0420 01:34:20.257786  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.257795  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:34:20.257800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:34:20.257857  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:34:20.298111  142411 cri.go:89] found id: ""
	I0420 01:34:20.298153  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.298164  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:34:20.298172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:34:20.298226  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:34:20.333435  142411 cri.go:89] found id: ""
	I0420 01:34:20.333469  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.333481  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:34:20.333489  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:34:20.333554  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:34:20.370848  142411 cri.go:89] found id: ""
	I0420 01:34:20.370872  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.370880  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:34:20.370890  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:34:20.370902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:34:20.425495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:34:20.425536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:34:20.442039  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:34:20.442066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:34:20.523456  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:34:20.523483  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:34:20.523504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:34:20.633387  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:34:20.633427  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0420 01:34:20.688731  142411 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0420 01:34:20.688783  142411 out.go:239] * 
	W0420 01:34:20.688839  142411 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.688862  142411 out.go:239] * 
	W0420 01:34:20.689758  142411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:34:20.693376  142411 out.go:177] 
	W0420 01:34:20.694909  142411 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.694971  142411 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0420 01:34:20.695003  142411 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0420 01:34:20.696409  142411 out.go:177] 
	
	
	==> CRI-O <==
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.763217264Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772,Verbose:false,}" file="otel-collector/interceptors.go:62" id=52858a42-274e-4c6a-90d7-650ff1b6b469 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.763428908Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1713576623862448063,StartedAt:1713576623977276125,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fcee6681b164edc8892802779b78785,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/2fcee6681b164edc8892802779b78785/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/2fcee6681b164edc8892802779b78785/containers/kube-controller-manager/e5ceca15,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagati
on:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-default-k8s-diff-port-907988_2fcee6681b164edc8892802779b78785/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,Oom
ScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=52858a42-274e-4c6a-90d7-650ff1b6b469 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.789683542Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b942191a-05e6-4cfc-8f4e-431fa90609be name=/runtime.v1.RuntimeService/Version
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.789766873Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b942191a-05e6-4cfc-8f4e-431fa90609be name=/runtime.v1.RuntimeService/Version
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.791624347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72665cbc-6b31-44ea-b55a-dc282d72492f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.792333793Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=adb30124-d8e6-43a1-b35f-394309148732 name=/runtime.v1.RuntimeService/Status
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.792383369Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=adb30124-d8e6-43a1-b35f-394309148732 name=/runtime.v1.RuntimeService/Status
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.792752070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577189792732314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72665cbc-6b31-44ea-b55a-dc282d72492f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.793883424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16333e8e-25f6-4526-aede-9437d95a9db2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.793999567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16333e8e-25f6-4526-aede-9437d95a9db2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.794208983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:202ece012609f9d48bbeb0e85472fbe2a0b2b772ec62432f396f557d5dd946ef,PodSandboxId:d9514b8bf59030a3f2b9706716cb9a3a1e48b9b068137809131bb1ada06fc8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576645381855211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739478ce-5d74-4be0-8a39-d80245d8aa8a,},Annotations:map[string]string{io.kubernetes.container.hash: c4733f46,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abbd634e052b9e52c32611fc47408d8fe7ee896d9ecd04d9cbf3aef12eccf57,PodSandboxId:8cf044dbd658fe3cc4049c9de56d760626ffee3cc09f9f48f6638101308a297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644812715862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p8dhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf589b6-f54b-4615-b95e-b95c89766e24,},Annotations:map[string]string{io.kubernetes.container.hash: 3ae49b9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8de8e15844cc717e83e053d00d6750706a13a63ee6c20708fc464bfc0c40a13,PodSandboxId:cf148ff5f383eb44911d9e17767aeb975ebd0641d07cce69aaae193b007f6dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644686616341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d07ba546-0251-4862-ad1b-0c3d5ee7b1f3,},Annotations:map[string]string{io.kubernetes.container.hash: 75743cdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1029b8d280d260e9ea03ba7dc34c5eb98fb8165005ba1384a6203dd82f91778,PodSandboxId:e1ff724454b3afe3852debf0a832c594c960a5a1f646ccaf1c32067d45c4f730,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713576643920587325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jt8wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ddf3ce-29f8-437d-bd31-89411c135012,},Annotations:map[string]string{io.kubernetes.container.hash: e3f6992f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3493c7f700417b4ac3a7012509af242191227e54e49a554081b5241815cd3348,PodSandboxId:848d25ed966118e3bd883b5a1bf8da4f8d784a733617553fec751ab79c816a85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576623892130394,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0145c53c6d1d18df04cacd509389f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ec0c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61125bc317fa0d6fca2fcda5eb460565c80f64678312eed04da29b68611a9d7c,PodSandboxId:75b26293d3b581136cd8bab406ee66a9d9cff38b5938dc653dda92234053a82e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576623896231637,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379168c771e6417e18a246073d15f9b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd475dac8ad321c108d5e9229490f0dd02ddb9bd9b62f9eb94bfeefb2601b63,PodSandboxId:903e1234c68a3e116e214e544c3601b2d60ac9994a398f5434a019a336e9a1fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576623814718254,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772,PodSandboxId:451d2b570db13613edc36b85d7b695c2595739d7dc86a707da669b193982796f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576623784375326,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fcee6681b164edc8892802779b78785,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1e6d10e5f7a185afc4d60b1f18df284304bf292915c0fb179a33d7c7488a0a,PodSandboxId:e27cf517fa17d426db075bcc8002d45661d4fe411604f02c466eb6aae5d01fbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713576333481660906,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16333e8e-25f6-4526-aede-9437d95a9db2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.841798113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=484c9e61-c2af-45dd-af33-2bbb81dd5208 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.841883010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=484c9e61-c2af-45dd-af33-2bbb81dd5208 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.843089763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb754fc7-0a5e-41bd-8c0e-e8d5de9898e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.843797896Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577189843769872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb754fc7-0a5e-41bd-8c0e-e8d5de9898e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.844307603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35d2bba8-d6a5-4038-bb01-ba7d842d5516 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.844358501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35d2bba8-d6a5-4038-bb01-ba7d842d5516 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.844686083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:202ece012609f9d48bbeb0e85472fbe2a0b2b772ec62432f396f557d5dd946ef,PodSandboxId:d9514b8bf59030a3f2b9706716cb9a3a1e48b9b068137809131bb1ada06fc8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576645381855211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739478ce-5d74-4be0-8a39-d80245d8aa8a,},Annotations:map[string]string{io.kubernetes.container.hash: c4733f46,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abbd634e052b9e52c32611fc47408d8fe7ee896d9ecd04d9cbf3aef12eccf57,PodSandboxId:8cf044dbd658fe3cc4049c9de56d760626ffee3cc09f9f48f6638101308a297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644812715862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p8dhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf589b6-f54b-4615-b95e-b95c89766e24,},Annotations:map[string]string{io.kubernetes.container.hash: 3ae49b9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8de8e15844cc717e83e053d00d6750706a13a63ee6c20708fc464bfc0c40a13,PodSandboxId:cf148ff5f383eb44911d9e17767aeb975ebd0641d07cce69aaae193b007f6dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644686616341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d07ba546-0251-4862-ad1b-0c3d5ee7b1f3,},Annotations:map[string]string{io.kubernetes.container.hash: 75743cdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1029b8d280d260e9ea03ba7dc34c5eb98fb8165005ba1384a6203dd82f91778,PodSandboxId:e1ff724454b3afe3852debf0a832c594c960a5a1f646ccaf1c32067d45c4f730,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713576643920587325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jt8wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ddf3ce-29f8-437d-bd31-89411c135012,},Annotations:map[string]string{io.kubernetes.container.hash: e3f6992f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3493c7f700417b4ac3a7012509af242191227e54e49a554081b5241815cd3348,PodSandboxId:848d25ed966118e3bd883b5a1bf8da4f8d784a733617553fec751ab79c816a85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576623892130394,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0145c53c6d1d18df04cacd509389f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ec0c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61125bc317fa0d6fca2fcda5eb460565c80f64678312eed04da29b68611a9d7c,PodSandboxId:75b26293d3b581136cd8bab406ee66a9d9cff38b5938dc653dda92234053a82e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576623896231637,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379168c771e6417e18a246073d15f9b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd475dac8ad321c108d5e9229490f0dd02ddb9bd9b62f9eb94bfeefb2601b63,PodSandboxId:903e1234c68a3e116e214e544c3601b2d60ac9994a398f5434a019a336e9a1fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576623814718254,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772,PodSandboxId:451d2b570db13613edc36b85d7b695c2595739d7dc86a707da669b193982796f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576623784375326,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fcee6681b164edc8892802779b78785,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1e6d10e5f7a185afc4d60b1f18df284304bf292915c0fb179a33d7c7488a0a,PodSandboxId:e27cf517fa17d426db075bcc8002d45661d4fe411604f02c466eb6aae5d01fbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713576333481660906,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35d2bba8-d6a5-4038-bb01-ba7d842d5516 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.880321950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88a864fe-b31d-4683-88d0-3ffb80896e84 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.880389117Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88a864fe-b31d-4683-88d0-3ffb80896e84 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.882258883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b688b471-4022-41cf-8aa3-0c1a3fe516a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.882640991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577189882620854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b688b471-4022-41cf-8aa3-0c1a3fe516a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.883612955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e100f13-0d7f-402c-9c69-52db49724c09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.883666368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e100f13-0d7f-402c-9c69-52db49724c09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:39:49 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:39:49.883857608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:202ece012609f9d48bbeb0e85472fbe2a0b2b772ec62432f396f557d5dd946ef,PodSandboxId:d9514b8bf59030a3f2b9706716cb9a3a1e48b9b068137809131bb1ada06fc8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576645381855211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739478ce-5d74-4be0-8a39-d80245d8aa8a,},Annotations:map[string]string{io.kubernetes.container.hash: c4733f46,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abbd634e052b9e52c32611fc47408d8fe7ee896d9ecd04d9cbf3aef12eccf57,PodSandboxId:8cf044dbd658fe3cc4049c9de56d760626ffee3cc09f9f48f6638101308a297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644812715862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p8dhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf589b6-f54b-4615-b95e-b95c89766e24,},Annotations:map[string]string{io.kubernetes.container.hash: 3ae49b9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8de8e15844cc717e83e053d00d6750706a13a63ee6c20708fc464bfc0c40a13,PodSandboxId:cf148ff5f383eb44911d9e17767aeb975ebd0641d07cce69aaae193b007f6dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644686616341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d07ba546-0251-4862-ad1b-0c3d5ee7b1f3,},Annotations:map[string]string{io.kubernetes.container.hash: 75743cdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1029b8d280d260e9ea03ba7dc34c5eb98fb8165005ba1384a6203dd82f91778,PodSandboxId:e1ff724454b3afe3852debf0a832c594c960a5a1f646ccaf1c32067d45c4f730,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713576643920587325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jt8wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ddf3ce-29f8-437d-bd31-89411c135012,},Annotations:map[string]string{io.kubernetes.container.hash: e3f6992f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3493c7f700417b4ac3a7012509af242191227e54e49a554081b5241815cd3348,PodSandboxId:848d25ed966118e3bd883b5a1bf8da4f8d784a733617553fec751ab79c816a85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576623892130394,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0145c53c6d1d18df04cacd509389f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ec0c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61125bc317fa0d6fca2fcda5eb460565c80f64678312eed04da29b68611a9d7c,PodSandboxId:75b26293d3b581136cd8bab406ee66a9d9cff38b5938dc653dda92234053a82e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576623896231637,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379168c771e6417e18a246073d15f9b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd475dac8ad321c108d5e9229490f0dd02ddb9bd9b62f9eb94bfeefb2601b63,PodSandboxId:903e1234c68a3e116e214e544c3601b2d60ac9994a398f5434a019a336e9a1fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576623814718254,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772,PodSandboxId:451d2b570db13613edc36b85d7b695c2595739d7dc86a707da669b193982796f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576623784375326,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fcee6681b164edc8892802779b78785,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1e6d10e5f7a185afc4d60b1f18df284304bf292915c0fb179a33d7c7488a0a,PodSandboxId:e27cf517fa17d426db075bcc8002d45661d4fe411604f02c466eb6aae5d01fbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713576333481660906,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e100f13-0d7f-402c-9c69-52db49724c09 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	202ece012609f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d9514b8bf5903       storage-provisioner
	9abbd634e052b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   8cf044dbd658f       coredns-7db6d8ff4d-p8dhp
	c8de8e15844cc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   cf148ff5f383e       coredns-7db6d8ff4d-g2nzn
	b1029b8d280d2       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   e1ff724454b3a       kube-proxy-jt8wr
	61125bc317fa0       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   75b26293d3b58       kube-scheduler-default-k8s-diff-port-907988
	3493c7f700417       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   848d25ed96611       etcd-default-k8s-diff-port-907988
	2dd475dac8ad3       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   903e1234c68a3       kube-apiserver-default-k8s-diff-port-907988
	78ee53c8b7912       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   451d2b570db13       kube-controller-manager-default-k8s-diff-port-907988
	eb1e6d10e5f7a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   14 minutes ago      Exited              kube-apiserver            1                   e27cf517fa17d       kube-apiserver-default-k8s-diff-port-907988
	
	
	==> coredns [9abbd634e052b9e52c32611fc47408d8fe7ee896d9ecd04d9cbf3aef12eccf57] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c8de8e15844cc717e83e053d00d6750706a13a63ee6c20708fc464bfc0c40a13] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-907988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-907988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=default-k8s-diff-port-907988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T01_30_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:30:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-907988
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:39:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:35:56 +0000   Sat, 20 Apr 2024 01:30:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:35:56 +0000   Sat, 20 Apr 2024 01:30:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:35:56 +0000   Sat, 20 Apr 2024 01:30:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:35:56 +0000   Sat, 20 Apr 2024 01:30:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    default-k8s-diff-port-907988
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 95948a85dd4149018df21ee92061e8a2
	  System UUID:                95948a85-dd41-4901-8df2-1ee92061e8a2
	  Boot ID:                    bde3fa4b-3c5f-4fd7-ae40-27bd8d3743bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-g2nzn                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-p8dhp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-907988                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-907988             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-907988    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-jt8wr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-907988             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-6rgpj                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node default-k8s-diff-port-907988 event: Registered Node default-k8s-diff-port-907988 in Controller
	
	
	==> dmesg <==
	[  +0.042816] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.608699] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.251727] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.691158] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.279555] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.060309] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073536] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.192400] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.139925] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.309630] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +5.094941] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.063921] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.788924] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.575309] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.371245] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.969603] kauditd_printk_skb: 27 callbacks suppressed
	[Apr20 01:30] systemd-fstab-generator[3603]: Ignoring "noauto" option for root device
	[  +0.066581] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.998341] systemd-fstab-generator[3928]: Ignoring "noauto" option for root device
	[  +0.082095] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.906241] systemd-fstab-generator[4126]: Ignoring "noauto" option for root device
	[  +0.107883] kauditd_printk_skb: 12 callbacks suppressed
	[Apr20 01:31] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [3493c7f700417b4ac3a7012509af242191227e54e49a554081b5241815cd3348] <==
	{"level":"info","ts":"2024-04-20T01:30:24.387485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 switched to configuration voters=(15611694107784645026)"}
	{"level":"info","ts":"2024-04-20T01:30:24.387849Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","added-peer-id":"d8a7e113a49009a2","added-peer-peer-urls":["https://192.168.39.222:2380"]}
	{"level":"info","ts":"2024-04-20T01:30:24.412054Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T01:30:24.412292Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d8a7e113a49009a2","initial-advertise-peer-urls":["https://192.168.39.222:2380"],"listen-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.222:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:30:24.412346Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T01:30:24.412437Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-04-20T01:30:24.412476Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-04-20T01:30:24.926022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-20T01:30:24.926121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-20T01:30:24.926172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgPreVoteResp from d8a7e113a49009a2 at term 1"}
	{"level":"info","ts":"2024-04-20T01:30:24.926203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became candidate at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:24.926228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgVoteResp from d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:24.926255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became leader at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:24.926281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8a7e113a49009a2 elected leader d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:24.930169Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d8a7e113a49009a2","local-member-attributes":"{Name:default-k8s-diff-port-907988 ClientURLs:[https://192.168.39.222:2379]}","request-path":"/0/members/d8a7e113a49009a2/attributes","cluster-id":"26257d506d5fabfb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:30:24.931879Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:24.932112Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:30:24.932954Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:30:24.932994Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:30:24.933032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:30:24.944133Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:24.944236Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:24.944274Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:24.945684Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.222:2379"}
	{"level":"info","ts":"2024-04-20T01:30:24.94928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:39:50 up 14 min,  0 users,  load average: 1.26, 0.59, 0.31
	Linux default-k8s-diff-port-907988 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2dd475dac8ad321c108d5e9229490f0dd02ddb9bd9b62f9eb94bfeefb2601b63] <==
	I0420 01:33:45.855191       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:35:26.963384       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:35:26.963851       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0420 01:35:27.964676       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:35:27.964839       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:35:27.964869       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:35:27.965138       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:35:27.965325       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:35:27.966560       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:36:27.965417       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:36:27.965570       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:36:27.965585       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:36:27.966843       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:36:27.966974       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:36:27.966985       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:38:27.966122       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:38:27.966292       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:38:27.966302       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:38:27.967572       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:38:27.967629       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:38:27.967639       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [eb1e6d10e5f7a185afc4d60b1f18df284304bf292915c0fb179a33d7c7488a0a] <==
	W0420 01:30:19.402539       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.468479       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.483315       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.530840       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.551630       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.630519       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.651412       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.684536       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.709222       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.774495       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.787015       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.823519       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.964687       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.980178       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.146537       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.174306       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.178287       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.235674       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.260543       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.449749       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.493185       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.526561       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.622271       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.625230       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.672846       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772] <==
	I0420 01:34:20.552321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="157.501µs"
	E0420 01:34:42.457824       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:34:42.907775       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:35:12.463712       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:35:12.917561       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:35:42.469216       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:35:42.927424       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:36:12.476278       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:36:12.936438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:36:42.481489       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:36:42.947066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0420 01:37:00.552778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="264.651µs"
	E0420 01:37:12.487024       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:37:12.551103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="206.714µs"
	I0420 01:37:12.957247       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:37:42.492546       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:37:42.966258       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:38:12.497633       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:38:12.976210       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:38:42.504443       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:38:42.986991       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:39:12.513439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:39:12.997174       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:39:42.520637       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:39:43.005684       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b1029b8d280d260e9ea03ba7dc34c5eb98fb8165005ba1384a6203dd82f91778] <==
	I0420 01:30:44.313172       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:30:44.329303       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.222"]
	I0420 01:30:44.445267       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:30:44.445304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:30:44.445317       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:30:44.477802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:30:44.478010       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:30:44.478025       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:30:44.533362       1 config.go:192] "Starting service config controller"
	I0420 01:30:44.533405       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:30:44.533440       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:30:44.533445       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:30:44.534148       1 config.go:319] "Starting node config controller"
	I0420 01:30:44.534156       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:30:44.635546       1 shared_informer.go:320] Caches are synced for node config
	I0420 01:30:44.635570       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:30:44.635594       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [61125bc317fa0d6fca2fcda5eb460565c80f64678312eed04da29b68611a9d7c] <==
	W0420 01:30:27.027816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:27.028431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:27.027622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 01:30:27.028496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 01:30:27.843272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 01:30:27.843466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 01:30:27.843306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:27.843547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:27.880197       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 01:30:27.880282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 01:30:27.961732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 01:30:27.961848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0420 01:30:27.963493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 01:30:27.963556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 01:30:28.148353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 01:30:28.148896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 01:30:28.155485       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:30:28.155688       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:30:28.199829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:28.199999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:28.299553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:28.299749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:28.310718       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:30:28.310865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0420 01:30:30.809633       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 01:37:29 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:37:29.562609    3935 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:37:29 default-k8s-diff-port-907988 kubelet[3935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:37:29 default-k8s-diff-port-907988 kubelet[3935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:37:29 default-k8s-diff-port-907988 kubelet[3935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:37:29 default-k8s-diff-port-907988 kubelet[3935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:37:37 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:37:37.536779    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:37:52 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:37:52.535237    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:38:07 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:38:07.537170    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:38:22 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:38:22.535433    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:38:29 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:38:29.560253    3935 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:38:29 default-k8s-diff-port-907988 kubelet[3935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:38:29 default-k8s-diff-port-907988 kubelet[3935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:38:29 default-k8s-diff-port-907988 kubelet[3935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:38:29 default-k8s-diff-port-907988 kubelet[3935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:38:33 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:38:33.536591    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:38:47 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:38:47.537281    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:39:00 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:39:00.534989    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:39:13 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:39:13.535559    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:39:28 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:39:28.535743    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:39:29 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:39:29.557512    3935 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:39:29 default-k8s-diff-port-907988 kubelet[3935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:39:29 default-k8s-diff-port-907988 kubelet[3935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:39:29 default-k8s-diff-port-907988 kubelet[3935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:39:29 default-k8s-diff-port-907988 kubelet[3935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:39:39 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:39:39.538158    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	
	
	==> storage-provisioner [202ece012609f9d48bbeb0e85472fbe2a0b2b772ec62432f396f557d5dd946ef] <==
	I0420 01:30:45.549330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 01:30:45.561699       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 01:30:45.562894       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 01:30:45.577575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 01:30:45.578027       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-907988_b7fd2827-c4c4-4970-bf24-b6ff22e80e25!
	I0420 01:30:45.578508       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78244ffc-cc6f-4be5-807c-3078b98a5438", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-907988_b7fd2827-c4c4-4970-bf24-b6ff22e80e25 became leader
	I0420 01:30:45.678705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-907988_b7fd2827-c4c4-4970-bf24-b6ff22e80e25!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-907988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-6rgpj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-907988 describe pod metrics-server-569cc877fc-6rgpj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-907988 describe pod metrics-server-569cc877fc-6rgpj: exit status 1 (63.780035ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-6rgpj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-907988 describe pod metrics-server-569cc877fc-6rgpj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0420 01:31:30.107090   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
E0420 01:31:54.410140   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-269507 -n embed-certs-269507
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-20 01:40:14.782840877 +0000 UTC m=+6188.297708255
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-269507 -n embed-certs-269507
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-269507 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-269507 logs -n 25: (2.121830633s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-172352 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | disable-driver-mounts-172352                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:17 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-338118             | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC | 20 Apr 24 01:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-907988  | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-269507            | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-564860        | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-338118                  | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-907988       | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:30 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-269507                 | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-564860             | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:21:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:21:33.400343  142411 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:21:33.400444  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400452  142411 out.go:304] Setting ErrFile to fd 2...
	I0420 01:21:33.400464  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400681  142411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:21:33.401213  142411 out.go:298] Setting JSON to false
	I0420 01:21:33.402151  142411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14640,"bootTime":1713561453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:21:33.402214  142411 start.go:139] virtualization: kvm guest
	I0420 01:21:33.404200  142411 out.go:177] * [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:21:33.405933  142411 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:21:33.407240  142411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:21:33.405946  142411 notify.go:220] Checking for updates...
	I0420 01:21:33.408693  142411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:21:33.409906  142411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:21:33.411155  142411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:21:33.412528  142411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:21:33.414062  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:21:33.414460  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.414524  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.428987  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0420 01:21:33.429348  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.429850  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.429873  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.430178  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.430370  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.431825  142411 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0420 01:21:33.432895  142411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:21:33.433209  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.433251  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.447157  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0420 01:21:33.447543  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.448080  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.448123  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.448444  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.448609  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.481664  142411 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:21:33.482784  142411 start.go:297] selected driver: kvm2
	I0420 01:21:33.482796  142411 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.482903  142411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:21:33.483572  142411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.483646  142411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:21:33.497421  142411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:21:33.497790  142411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:21:33.497854  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:21:33.497869  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:21:33.497915  142411 start.go:340] cluster config:
	{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.498027  142411 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.499624  142411 out.go:177] * Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	I0420 01:21:33.500874  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:21:33.500901  142411 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:21:33.500914  142411 cache.go:56] Caching tarball of preloaded images
	I0420 01:21:33.500992  142411 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:21:33.501007  142411 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 01:21:33.501116  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:21:33.501613  142411 start.go:360] acquireMachinesLock for old-k8s-version-564860: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:21:35.817529  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:38.889617  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:44.969590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:48.041555  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:54.121550  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:57.193604  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:03.273575  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:06.345487  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:12.425567  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:15.497538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:21.577563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:24.649534  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:30.729573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:33.801566  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:39.881590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:42.953591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:49.033641  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:52.105579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:58.185591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:01.257655  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:07.337585  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:10.409568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:16.489562  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:19.561602  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:25.641579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:28.713581  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:34.793618  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:37.865643  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:43.945593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:47.017561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:53.097597  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:56.169538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:02.249561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:05.321557  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:11.401563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:14.473539  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:20.553591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:23.625573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:29.705563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:32.777590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:38.857568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:41.929619  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:48.009565  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:51.081536  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:57.161593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:00.233633  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:03.237801  141927 start.go:364] duration metric: took 4m24.096402827s to acquireMachinesLock for "default-k8s-diff-port-907988"
	I0420 01:25:03.237873  141927 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:03.237883  141927 fix.go:54] fixHost starting: 
	I0420 01:25:03.238412  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:03.238453  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:03.254029  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I0420 01:25:03.254570  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:03.255071  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:25:03.255097  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:03.255474  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:03.255703  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:03.255871  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:25:03.257395  141927 fix.go:112] recreateIfNeeded on default-k8s-diff-port-907988: state=Stopped err=<nil>
	I0420 01:25:03.257430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	W0420 01:25:03.257577  141927 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:03.259083  141927 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-907988" ...
	I0420 01:25:03.260199  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Start
	I0420 01:25:03.260402  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring networks are active...
	I0420 01:25:03.261176  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network default is active
	I0420 01:25:03.261553  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network mk-default-k8s-diff-port-907988 is active
	I0420 01:25:03.262016  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Getting domain xml...
	I0420 01:25:03.262834  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Creating domain...
	I0420 01:25:03.235208  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:03.235275  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235620  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:25:03.235653  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235902  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:25:03.237636  141746 machine.go:97] duration metric: took 4m37.412949021s to provisionDockerMachine
	I0420 01:25:03.237677  141746 fix.go:56] duration metric: took 4m37.433896084s for fixHost
	I0420 01:25:03.237685  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 4m37.433927307s
	W0420 01:25:03.237715  141746 start.go:713] error starting host: provision: host is not running
	W0420 01:25:03.237980  141746 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0420 01:25:03.238076  141746 start.go:728] Will try again in 5 seconds ...
	I0420 01:25:04.453535  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting to get IP...
	I0420 01:25:04.454427  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454803  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454886  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.454785  143129 retry.go:31] will retry after 205.593849ms: waiting for machine to come up
	I0420 01:25:04.662560  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663106  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663133  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.663007  143129 retry.go:31] will retry after 246.821866ms: waiting for machine to come up
	I0420 01:25:04.911578  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912067  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912100  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.912014  143129 retry.go:31] will retry after 478.36287ms: waiting for machine to come up
	I0420 01:25:05.391624  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392063  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.391965  143129 retry.go:31] will retry after 495.387005ms: waiting for machine to come up
	I0420 01:25:05.888569  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889093  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889116  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.889009  143129 retry.go:31] will retry after 721.867239ms: waiting for machine to come up
	I0420 01:25:06.613018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613550  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613583  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:06.613495  143129 retry.go:31] will retry after 724.502229ms: waiting for machine to come up
	I0420 01:25:07.339473  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339974  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:07.339883  143129 retry.go:31] will retry after 916.936196ms: waiting for machine to come up
	I0420 01:25:08.258657  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259033  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259064  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:08.258981  143129 retry.go:31] will retry after 1.088675043s: waiting for machine to come up
	I0420 01:25:08.239597  141746 start.go:360] acquireMachinesLock for no-preload-338118: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:25:09.349021  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349421  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:09.349362  143129 retry.go:31] will retry after 1.139610002s: waiting for machine to come up
	I0420 01:25:10.490715  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491162  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491190  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:10.491119  143129 retry.go:31] will retry after 1.625829976s: waiting for machine to come up
	I0420 01:25:12.118751  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119231  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119254  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:12.119184  143129 retry.go:31] will retry after 2.904309002s: waiting for machine to come up
	I0420 01:25:15.025713  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026310  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:15.026227  143129 retry.go:31] will retry after 3.471792967s: waiting for machine to come up
	I0420 01:25:18.500247  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500679  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:18.500595  143129 retry.go:31] will retry after 4.499766051s: waiting for machine to come up
	I0420 01:25:23.005446  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.005935  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Found IP for machine: 192.168.39.222
	I0420 01:25:23.005956  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserving static IP address...
	I0420 01:25:23.005970  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has current primary IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.006453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.006479  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserved static IP address: 192.168.39.222
	I0420 01:25:23.006513  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | skip adding static IP to network mk-default-k8s-diff-port-907988 - found existing host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"}
	I0420 01:25:23.006537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for SSH to be available...
	I0420 01:25:23.006544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Getting to WaitForSSH function...
	I0420 01:25:23.009090  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009505  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.009537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009658  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH client type: external
	I0420 01:25:23.009695  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa (-rw-------)
	I0420 01:25:23.009732  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:23.009748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | About to run SSH command:
	I0420 01:25:23.009766  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | exit 0
	I0420 01:25:23.133489  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | SSH cmd err, output: <nil>: 
	I0420 01:25:23.133940  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetConfigRaw
	I0420 01:25:23.134589  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.137340  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.137685  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.137708  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.138000  141927 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/config.json ...
	I0420 01:25:23.138228  141927 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:23.138253  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:23.138461  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.140536  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.140815  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.140841  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.141024  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.141244  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141450  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141595  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.141777  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.142053  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.142067  141927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:23.249946  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:23.249979  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250250  141927 buildroot.go:166] provisioning hostname "default-k8s-diff-port-907988"
	I0420 01:25:23.250280  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250483  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.253030  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.253456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253564  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.253755  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.253978  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.254135  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.254334  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.254504  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.254517  141927 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-907988 && echo "default-k8s-diff-port-907988" | sudo tee /etc/hostname
	I0420 01:25:23.379061  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-907988
	
	I0420 01:25:23.379092  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.381893  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382249  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.382278  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382465  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.382666  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382831  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382939  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.383118  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.383324  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.383349  141927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-907988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-907988/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-907988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:23.499869  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:23.499903  141927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:23.499932  141927 buildroot.go:174] setting up certificates
	I0420 01:25:23.499941  141927 provision.go:84] configureAuth start
	I0420 01:25:23.499950  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.500178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.502735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503050  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.503085  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503201  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.505586  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.505924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.505968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.506036  141927 provision.go:143] copyHostCerts
	I0420 01:25:23.506136  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:23.506150  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:23.506233  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:23.506386  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:23.506396  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:23.506444  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:23.506525  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:23.506536  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:23.506569  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:23.506640  141927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-907988 san=[127.0.0.1 192.168.39.222 default-k8s-diff-port-907988 localhost minikube]
	I0420 01:25:23.598855  141927 provision.go:177] copyRemoteCerts
	I0420 01:25:23.598930  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:23.598967  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.602183  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.602544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.602903  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.603143  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.603301  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:23.688294  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:23.714719  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0420 01:25:23.744530  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:25:23.774733  141927 provision.go:87] duration metric: took 274.778779ms to configureAuth
	I0420 01:25:23.774756  141927 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:23.774990  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:23.775083  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.777817  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.778213  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778376  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.778596  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778763  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778984  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.779167  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.779364  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.779393  141927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:24.314463  142057 start.go:364] duration metric: took 4m32.915907541s to acquireMachinesLock for "embed-certs-269507"
	I0420 01:25:24.314618  142057 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:24.314645  142057 fix.go:54] fixHost starting: 
	I0420 01:25:24.315169  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:24.315220  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:24.331820  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0420 01:25:24.332243  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:24.332707  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:25:24.332730  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:24.333157  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:24.333371  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:24.333551  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:25:24.335004  142057 fix.go:112] recreateIfNeeded on embed-certs-269507: state=Stopped err=<nil>
	I0420 01:25:24.335044  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	W0420 01:25:24.335211  142057 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:24.337246  142057 out.go:177] * Restarting existing kvm2 VM for "embed-certs-269507" ...
	I0420 01:25:24.056795  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:24.056832  141927 machine.go:97] duration metric: took 918.585863ms to provisionDockerMachine
	I0420 01:25:24.056849  141927 start.go:293] postStartSetup for "default-k8s-diff-port-907988" (driver="kvm2")
	I0420 01:25:24.056865  141927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:24.056889  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.057250  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:24.057281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.060602  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.060992  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.061028  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.061196  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.061422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.061631  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.061785  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.152109  141927 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:24.157292  141927 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:24.157330  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:24.157397  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:24.157490  141927 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:24.157606  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:24.171039  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:24.201343  141927 start.go:296] duration metric: took 144.476748ms for postStartSetup
	I0420 01:25:24.201383  141927 fix.go:56] duration metric: took 20.963499628s for fixHost
	I0420 01:25:24.201409  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.204283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204648  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.204681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204842  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.205022  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205204  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205411  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.205732  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:24.206255  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:24.206269  141927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:24.314311  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576324.296261493
	
	I0420 01:25:24.314336  141927 fix.go:216] guest clock: 1713576324.296261493
	I0420 01:25:24.314346  141927 fix.go:229] Guest: 2024-04-20 01:25:24.296261493 +0000 UTC Remote: 2024-04-20 01:25:24.201388226 +0000 UTC m=+285.207728057 (delta=94.873267ms)
	I0420 01:25:24.314373  141927 fix.go:200] guest clock delta is within tolerance: 94.873267ms
	I0420 01:25:24.314380  141927 start.go:83] releasing machines lock for "default-k8s-diff-port-907988", held for 21.076529311s
	I0420 01:25:24.314420  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.314699  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:24.317281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.317731  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317858  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318364  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318557  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318664  141927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:24.318723  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.318833  141927 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:24.318862  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.321519  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321572  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321937  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.321968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321994  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.322011  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.322121  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322233  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322323  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322502  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322730  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.322871  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.403742  141927 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:24.429207  141927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:24.590621  141927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:24.597818  141927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:24.597890  141927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:24.617031  141927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:24.617050  141927 start.go:494] detecting cgroup driver to use...
	I0420 01:25:24.617126  141927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:24.643134  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:24.658222  141927 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:24.658275  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:24.672409  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:24.686722  141927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:24.810871  141927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:24.965702  141927 docker.go:233] disabling docker service ...
	I0420 01:25:24.965765  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:24.984504  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:24.999580  141927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:25.151023  141927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:25.278443  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:25.295439  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:25.316425  141927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:25.316494  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.329052  141927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:25.329119  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.342102  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.354831  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.368084  141927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:25.380515  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.392952  141927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.411707  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.423776  141927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:25.434175  141927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:25.434234  141927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:25.449180  141927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:25.460018  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:25.579669  141927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:25.741777  141927 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:25.741854  141927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:25.747422  141927 start.go:562] Will wait 60s for crictl version
	I0420 01:25:25.747478  141927 ssh_runner.go:195] Run: which crictl
	I0420 01:25:25.752164  141927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:25.800400  141927 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:25.800491  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.832099  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.865692  141927 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:25:24.338547  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Start
	I0420 01:25:24.338743  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring networks are active...
	I0420 01:25:24.339527  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network default is active
	I0420 01:25:24.340064  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network mk-embed-certs-269507 is active
	I0420 01:25:24.340520  142057 main.go:141] libmachine: (embed-certs-269507) Getting domain xml...
	I0420 01:25:24.341363  142057 main.go:141] libmachine: (embed-certs-269507) Creating domain...
	I0420 01:25:25.566725  142057 main.go:141] libmachine: (embed-certs-269507) Waiting to get IP...
	I0420 01:25:25.567704  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.568195  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.568263  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.568160  143271 retry.go:31] will retry after 229.672507ms: waiting for machine to come up
	I0420 01:25:25.799515  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.799964  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.799994  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.799916  143271 retry.go:31] will retry after 352.048372ms: waiting for machine to come up
	I0420 01:25:26.153710  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.154217  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.154245  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.154159  143271 retry.go:31] will retry after 451.404487ms: waiting for machine to come up
	I0420 01:25:25.867283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:25.870225  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.870725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:25.870748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.871001  141927 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:25.875986  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
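
The hosts update above rebuilds /etc/hosts through a temp file, likely because the redirection itself runs as the unprivileged SSH user and only the final cp needs root; the old host.minikube.internal entry is filtered out rather than appended to. A hedged sketch of the same idiom:

    # Sketch: replace (not append) a hosts entry; only the final cp runs as root.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
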
	I0420 01:25:25.890923  141927 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907
988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:25.891043  141927 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:25.891088  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:25.934665  141927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:25.934743  141927 ssh_runner.go:195] Run: which lz4
	I0420 01:25:25.939157  141927 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:25:25.943759  141927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:25.943788  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:27.674416  141927 crio.go:462] duration metric: took 1.735279369s to copy over tarball
	I0420 01:25:27.674484  141927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:25:26.607751  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.608327  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.608362  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.608273  143271 retry.go:31] will retry after 548.149542ms: waiting for machine to come up
	I0420 01:25:27.157746  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.158193  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.158220  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.158158  143271 retry.go:31] will retry after 543.066807ms: waiting for machine to come up
	I0420 01:25:27.702417  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.702812  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.702842  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.702778  143271 retry.go:31] will retry after 801.842999ms: waiting for machine to come up
	I0420 01:25:28.505673  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:28.506233  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:28.506264  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:28.506169  143271 retry.go:31] will retry after 1.176665861s: waiting for machine to come up
	I0420 01:25:29.684134  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:29.684642  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:29.684676  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:29.684582  143271 retry.go:31] will retry after 1.09397916s: waiting for machine to come up
	I0420 01:25:30.780467  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:30.780962  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:30.780987  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:30.780924  143271 retry.go:31] will retry after 1.560706704s: waiting for machine to come up
	I0420 01:25:30.280138  141927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.605620888s)
	I0420 01:25:30.280235  141927 crio.go:469] duration metric: took 2.605784372s to extract the tarball
	I0420 01:25:30.280269  141927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:30.323590  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:30.384053  141927 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:30.384083  141927 cache_images.go:84] Images are preloaded, skipping loading
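
Condensed, the preload handling above is: query the runtime's image store and, only when the expected control-plane image is missing, copy the preloaded tarball over and unpack it into /var. A minimal sketch, assuming the tarball has already been copied to /preloaded.tar.lz4 as in the log:

    # Sketch of the check-and-extract flow; image name and paths taken from the log above.
    if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.30.0'; then
      sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
      sudo rm -f /preloaded.tar.lz4
    fi
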
	I0420 01:25:30.384094  141927 kubeadm.go:928] updating node { 192.168.39.222 8444 v1.30.0 crio true true} ...
	I0420 01:25:30.384258  141927 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-907988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:30.384347  141927 ssh_runner.go:195] Run: crio config
	I0420 01:25:30.431033  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:30.431059  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:30.431074  141927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:30.431094  141927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-907988 NodeName:default-k8s-diff-port-907988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:30.431267  141927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-907988"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:30.431327  141927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:30.444735  141927 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:30.444807  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:30.457543  141927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0420 01:25:30.477858  141927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:30.497632  141927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0420 01:25:30.518062  141927 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:30.522820  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:30.538677  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:30.686290  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:30.721316  141927 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988 for IP: 192.168.39.222
	I0420 01:25:30.721342  141927 certs.go:194] generating shared ca certs ...
	I0420 01:25:30.721373  141927 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:30.721607  141927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:30.721664  141927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:30.721679  141927 certs.go:256] generating profile certs ...
	I0420 01:25:30.721789  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/client.key
	I0420 01:25:30.721873  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key.b8de10ae
	I0420 01:25:30.721912  141927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key
	I0420 01:25:30.722019  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:30.722052  141927 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:30.722067  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:30.722094  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:30.722122  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:30.722144  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:30.722189  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:30.723048  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:30.762666  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:30.800218  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:30.849282  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:30.893355  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0420 01:25:30.924642  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:30.956734  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:30.986491  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:31.015876  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:31.043860  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:31.073822  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:31.100731  141927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:31.119908  141927 ssh_runner.go:195] Run: openssl version
	I0420 01:25:31.128209  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:31.140164  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145371  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145432  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.151726  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:31.163371  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:31.175115  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180237  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180286  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.186548  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:31.198703  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:31.211529  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217258  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217326  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.223822  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
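
The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above are OpenSSL subject hashes, which is why each ln -fs is preceded by an openssl x509 -hash call. A sketch of the same idiom for one certificate:

    # Sketch: derive the subject-hash link name that OpenSSL's CA lookup expects.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
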
	I0420 01:25:31.236363  141927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:31.241793  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:31.250826  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:31.259850  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:31.267387  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:31.274477  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:31.281452  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
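
The -checkend 86400 probes above ask OpenSSL whether each certificate remains valid for another 86400 seconds (24 hours); a non-zero exit would flag the certificate as expiring soon. A minimal sketch:

    # Sketch: -checkend exits non-zero if the cert expires within the given window.
    if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "expires within 24h"
    fi
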
	I0420 01:25:31.287980  141927 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:31.288094  141927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:31.288159  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.344552  141927 cri.go:89] found id: ""
	I0420 01:25:31.344646  141927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:31.357049  141927 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:31.357075  141927 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:31.357081  141927 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:31.357147  141927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:31.368636  141927 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:31.370055  141927 kubeconfig.go:125] found "default-k8s-diff-port-907988" server: "https://192.168.39.222:8444"
	I0420 01:25:31.373063  141927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:31.384821  141927 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.222
	I0420 01:25:31.384861  141927 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:31.384876  141927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:31.384946  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.432801  141927 cri.go:89] found id: ""
	I0420 01:25:31.432902  141927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:31.458842  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:31.472706  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:31.472728  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:31.472780  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:25:31.486221  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:31.486276  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:31.500036  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:25:31.510180  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:31.510237  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:31.520560  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.530333  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:31.530387  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.541053  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:25:31.551200  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:31.551257  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:31.561364  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:31.572967  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:31.690537  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.319980  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.546554  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.631937  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
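
Because existing configuration files were found, the restart path re-runs individual kubeadm init phases against the generated config rather than a full kubeadm init. A condensed sketch of the sequence above, with the binary and config paths taken from the log:

    # Sketch of the phased restart; each phase reuses /var/tmp/minikube/kubeadm.yaml.
    KUBEADM=/var/lib/minikube/binaries/v1.30.0/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo "$KUBEADM" init phase certs all --config "$CFG"
    sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"
    sudo "$KUBEADM" init phase kubelet-start --config "$CFG"
    sudo "$KUBEADM" init phase control-plane all --config "$CFG"
    sudo "$KUBEADM" init phase etcd local --config "$CFG"
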
	I0420 01:25:32.729738  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:32.729838  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.230769  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.730452  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.807772  141927 api_server.go:72] duration metric: took 1.07803345s to wait for apiserver process to appear ...
	I0420 01:25:33.807805  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:33.807829  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:33.808551  141927 api_server.go:269] stopped: https://192.168.39.222:8444/healthz: Get "https://192.168.39.222:8444/healthz": dial tcp 192.168.39.222:8444: connect: connection refused
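
From here the start code polls /healthz until the apiserver's post-start hooks settle: connection refused while the process is still coming up, 403 while anonymous access to /healthz is not yet allowed, 500 while individual hooks still report failed, and finally 200. A hedged sketch of an equivalent probe loop:

    # Sketch: poll the apiserver healthz endpoint until it reports 200.
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.222:8444/healthz)" = "200" ]; do
      sleep 0.5
    done
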
	I0420 01:25:32.342951  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:32.343373  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:32.343420  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:32.343352  143271 retry.go:31] will retry after 1.871100952s: waiting for machine to come up
	I0420 01:25:34.215884  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:34.216313  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:34.216341  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:34.216253  143271 retry.go:31] will retry after 2.017753728s: waiting for machine to come up
	I0420 01:25:36.237296  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:36.237906  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:36.237936  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:36.237856  143271 retry.go:31] will retry after 3.431912056s: waiting for machine to come up
	I0420 01:25:34.308465  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.098889  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.098928  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.098945  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.149496  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.149534  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.308936  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.313975  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.314005  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:37.808680  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.818747  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.818784  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.307905  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.318528  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.318563  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.808127  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.816135  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.816167  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.307985  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.313712  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.313753  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.808225  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.812825  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.812858  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.308366  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.312930  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:40.312970  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.808320  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.812979  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:25:40.820265  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:25:40.820289  141927 api_server.go:131] duration metric: took 7.012476869s to wait for apiserver health ...
	I0420 01:25:40.820298  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:40.820304  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:40.822367  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
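
The api_server.go lines above show the health wait: the apiserver /healthz endpoint is probed roughly every 500ms, returning 500 while the apiservice-discovery-controller post-start hook is still failing, until it finally answers 200 and the control-plane version is read. A minimal sketch of that polling pattern, assuming a plain net/http client and using the URL and 4m window shown in the log (the TLS-skipping transport is an assumption for brevity, not minikube's actual client setup):

// Illustrative sketch of the /healthz wait loop logged above; not part of the captured log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.222:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy") // the log's "returned 200: ok" case
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
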
	I0420 01:25:39.671070  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:39.671556  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:39.671614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:39.671502  143271 retry.go:31] will retry after 3.954438708s: waiting for machine to come up
	I0420 01:25:40.823843  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:25:40.837960  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:25:40.858294  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:25:40.867542  141927 system_pods.go:59] 8 kube-system pods found
	I0420 01:25:40.867577  141927 system_pods.go:61] "coredns-7db6d8ff4d-7v886" [0e0b3a5f-041a-4bbc-94aa-c9571a8761ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:25:40.867584  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [88f687c4-8865-4fe6-92f1-448cfde6117c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:25:40.867590  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [2c9f0d90-35c6-45ad-b9b1-9504c55a1e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:25:40.867597  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [949ce449-06b4-4650-8ba0-7567637d6aec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:25:40.867604  141927 system_pods.go:61] "kube-proxy-dg6xn" [1124d9e8-41aa-44a9-8a4a-eafd2cd6c6c9] Running
	I0420 01:25:40.867626  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [df93de11-c23d-4f5d-afd4-1af7928933fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:25:40.867640  141927 system_pods.go:61] "metrics-server-569cc877fc-rqqlt" [2c7d91c3-fce8-4603-a7be-8d9b415d71f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:25:40.867647  141927 system_pods.go:61] "storage-provisioner" [af4dc99d-feef-4c24-852a-4c8cad22dd7d] Running
	I0420 01:25:40.867654  141927 system_pods.go:74] duration metric: took 9.33485ms to wait for pod list to return data ...
	I0420 01:25:40.867670  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:25:40.871045  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:25:40.871067  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:25:40.871078  141927 node_conditions.go:105] duration metric: took 3.402743ms to run NodePressure ...
	I0420 01:25:40.871094  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:41.142438  141927 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151801  141927 kubeadm.go:733] kubelet initialised
	I0420 01:25:41.151822  141927 kubeadm.go:734] duration metric: took 9.359538ms waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151830  141927 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:25:41.160583  141927 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.169184  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169214  141927 pod_ready.go:81] duration metric: took 8.596607ms for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.169226  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169234  141927 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.175518  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175544  141927 pod_ready.go:81] duration metric: took 6.298273ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.175558  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175567  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.189038  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189062  141927 pod_ready.go:81] duration metric: took 13.484198ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.189072  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189078  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.261162  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261191  141927 pod_ready.go:81] duration metric: took 72.106763ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.261203  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261210  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662532  141927 pod_ready.go:92] pod "kube-proxy-dg6xn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:41.662553  141927 pod_ready.go:81] duration metric: took 401.337101ms for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662562  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:43.670281  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
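
The pod_ready.go entries above wait up to 4m0s for each system-critical pod to report the Ready condition, skipping (with the "node ... is currently not Ready" warning) while the hosting node itself is NotReady. A minimal client-go sketch of that Ready-condition check, assuming a kubeconfig path as a placeholder; the pod and namespace names are taken from the log for illustration only:

// Illustrative sketch of the Ready-condition wait logged above; not part of the captured log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the pod reports Ready or the 4m window used in the log expires.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-default-k8s-diff-port-907988", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
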
	I0420 01:25:45.122924  142411 start.go:364] duration metric: took 4m11.621269498s to acquireMachinesLock for "old-k8s-version-564860"
	I0420 01:25:45.122996  142411 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:45.123018  142411 fix.go:54] fixHost starting: 
	I0420 01:25:45.123538  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:45.123581  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:45.141340  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0420 01:25:45.141873  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:45.142555  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:25:45.142592  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:45.142979  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:45.143234  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:25:45.143426  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetState
	I0420 01:25:45.145067  142411 fix.go:112] recreateIfNeeded on old-k8s-version-564860: state=Stopped err=<nil>
	I0420 01:25:45.145114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	W0420 01:25:45.145289  142411 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:45.147498  142411 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-564860" ...
	I0420 01:25:43.630616  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631126  142057 main.go:141] libmachine: (embed-certs-269507) Found IP for machine: 192.168.50.184
	I0420 01:25:43.631159  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has current primary IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631173  142057 main.go:141] libmachine: (embed-certs-269507) Reserving static IP address...
	I0420 01:25:43.631625  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.631677  142057 main.go:141] libmachine: (embed-certs-269507) DBG | skip adding static IP to network mk-embed-certs-269507 - found existing host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"}
	I0420 01:25:43.631692  142057 main.go:141] libmachine: (embed-certs-269507) Reserved static IP address: 192.168.50.184
	I0420 01:25:43.631710  142057 main.go:141] libmachine: (embed-certs-269507) Waiting for SSH to be available...
	I0420 01:25:43.631731  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Getting to WaitForSSH function...
	I0420 01:25:43.634292  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.634650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634833  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH client type: external
	I0420 01:25:43.634883  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa (-rw-------)
	I0420 01:25:43.634916  142057 main.go:141] libmachine: (embed-certs-269507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:43.634935  142057 main.go:141] libmachine: (embed-certs-269507) DBG | About to run SSH command:
	I0420 01:25:43.634949  142057 main.go:141] libmachine: (embed-certs-269507) DBG | exit 0
	I0420 01:25:43.757712  142057 main.go:141] libmachine: (embed-certs-269507) DBG | SSH cmd err, output: <nil>: 
	I0420 01:25:43.758118  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetConfigRaw
	I0420 01:25:43.758820  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:43.761626  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762007  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.762083  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762328  142057 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/config.json ...
	I0420 01:25:43.762556  142057 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:43.762575  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:43.762827  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.765841  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.766304  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766461  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.766636  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766766  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766884  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.767111  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.767371  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.767386  142057 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:43.874709  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:43.874741  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875018  142057 buildroot.go:166] provisioning hostname "embed-certs-269507"
	I0420 01:25:43.875052  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875265  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.878226  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878645  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.878675  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878767  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.878976  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879120  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879246  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.879375  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.879585  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.879613  142057 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-269507 && echo "embed-certs-269507" | sudo tee /etc/hostname
	I0420 01:25:44.003458  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-269507
	
	I0420 01:25:44.003502  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.006277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006706  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.006745  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006922  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.007227  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007417  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007604  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.007772  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.007959  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.007979  142057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-269507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-269507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-269507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:44.124457  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:44.124494  142057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:44.124516  142057 buildroot.go:174] setting up certificates
	I0420 01:25:44.124526  142057 provision.go:84] configureAuth start
	I0420 01:25:44.124537  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:44.124850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:44.127589  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.127958  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.127980  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.128196  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.130485  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130792  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.130830  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130992  142057 provision.go:143] copyHostCerts
	I0420 01:25:44.131060  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:44.131075  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:44.131132  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:44.131237  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:44.131246  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:44.131266  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:44.131326  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:44.131333  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:44.131349  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:44.131397  142057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.embed-certs-269507 san=[127.0.0.1 192.168.50.184 embed-certs-269507 localhost minikube]
	I0420 01:25:44.404404  142057 provision.go:177] copyRemoteCerts
	I0420 01:25:44.404469  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:44.404498  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.407318  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.407683  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.408033  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.408182  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.408307  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.498069  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:44.524979  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0420 01:25:44.553537  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:25:44.580307  142057 provision.go:87] duration metric: took 455.767679ms to configureAuth
	I0420 01:25:44.580332  142057 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:44.580609  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:44.580722  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.583352  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583728  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.583761  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583978  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.584205  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584383  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584516  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.584715  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.584905  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.584926  142057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:44.882565  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:44.882599  142057 machine.go:97] duration metric: took 1.120028956s to provisionDockerMachine
	I0420 01:25:44.882612  142057 start.go:293] postStartSetup for "embed-certs-269507" (driver="kvm2")
	I0420 01:25:44.882622  142057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:44.882639  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:44.882971  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:44.883012  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.885829  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886181  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.886208  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886372  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.886598  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.886761  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.886915  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.972428  142057 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:44.977228  142057 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:44.977257  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:44.977344  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:44.977435  142057 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:44.977552  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:44.987372  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:45.014435  142057 start.go:296] duration metric: took 131.807177ms for postStartSetup
	I0420 01:25:45.014484  142057 fix.go:56] duration metric: took 20.699839101s for fixHost
	I0420 01:25:45.014512  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.017361  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017768  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.017795  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017943  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.018150  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018302  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018421  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.018643  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:45.018815  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:45.018827  142057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:45.122766  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576345.101529100
	
	I0420 01:25:45.122788  142057 fix.go:216] guest clock: 1713576345.101529100
	I0420 01:25:45.122796  142057 fix.go:229] Guest: 2024-04-20 01:25:45.1015291 +0000 UTC Remote: 2024-04-20 01:25:45.014489313 +0000 UTC m=+293.764572165 (delta=87.039787ms)
	I0420 01:25:45.122823  142057 fix.go:200] guest clock delta is within tolerance: 87.039787ms
	I0420 01:25:45.122828  142057 start.go:83] releasing machines lock for "embed-certs-269507", held for 20.808247089s
	I0420 01:25:45.122851  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.123156  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:45.125956  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126377  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.126408  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126536  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127059  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127264  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127349  142057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:45.127404  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.127470  142057 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:45.127497  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.130071  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130393  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130427  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130447  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130727  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.130825  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130854  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130932  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131041  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.131115  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131220  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.131301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131451  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131597  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.211824  142057 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:45.236425  142057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:45.383069  142057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:45.391072  142057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:45.391159  142057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:45.410287  142057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:45.410313  142057 start.go:494] detecting cgroup driver to use...
	I0420 01:25:45.410395  142057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:45.433663  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:45.452933  142057 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:45.452999  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:45.473208  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:45.493261  142057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:45.650111  142057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:45.847482  142057 docker.go:233] disabling docker service ...
	I0420 01:25:45.847559  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:45.871032  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:45.892747  142057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:46.076222  142057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:46.218078  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:46.236006  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:46.259279  142057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:46.259363  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.272573  142057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:46.272647  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.286468  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.298708  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.313197  142057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:46.332844  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.345531  142057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.367686  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.379702  142057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:46.390491  142057 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:46.390558  142057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:46.406027  142057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:46.417370  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:46.543690  142057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:46.725507  142057 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:46.725599  142057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:46.734173  142057 start.go:562] Will wait 60s for crictl version
	I0420 01:25:46.734246  142057 ssh_runner.go:195] Run: which crictl
	I0420 01:25:46.740381  142057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:46.801341  142057 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:46.801431  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.843121  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.889958  142057 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:25:45.148885  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .Start
	I0420 01:25:45.149115  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring networks are active...
	I0420 01:25:45.149856  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network default is active
	I0420 01:25:45.150205  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network mk-old-k8s-version-564860 is active
	I0420 01:25:45.150615  142411 main.go:141] libmachine: (old-k8s-version-564860) Getting domain xml...
	I0420 01:25:45.151296  142411 main.go:141] libmachine: (old-k8s-version-564860) Creating domain...
	I0420 01:25:46.465532  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting to get IP...
	I0420 01:25:46.466816  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.467306  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.467383  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.467288  143434 retry.go:31] will retry after 265.980653ms: waiting for machine to come up
	I0420 01:25:46.735144  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.735676  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.735700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.735627  143434 retry.go:31] will retry after 254.534112ms: waiting for machine to come up
	I0420 01:25:46.992222  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.992707  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.992738  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.992621  143434 retry.go:31] will retry after 434.179962ms: waiting for machine to come up
	I0420 01:25:47.428397  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.428949  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.428987  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.428899  143434 retry.go:31] will retry after 533.143168ms: waiting for machine to come up
	I0420 01:25:47.963467  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.964008  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.964035  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.963957  143434 retry.go:31] will retry after 601.536298ms: waiting for machine to come up
	I0420 01:25:45.675159  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:48.175457  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:48.175487  141927 pod_ready.go:81] duration metric: took 6.512916578s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:48.175499  141927 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:46.891233  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:46.894647  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895107  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:46.895170  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895398  142057 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:46.900604  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:46.920025  142057 kubeadm.go:877] updating cluster {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:46.920184  142057 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:46.920247  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:46.967086  142057 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:46.967171  142057 ssh_runner.go:195] Run: which lz4
	I0420 01:25:46.973391  142057 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:25:46.979210  142057 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:46.979241  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:48.806615  142057 crio.go:462] duration metric: took 1.83326325s to copy over tarball
	I0420 01:25:48.806701  142057 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:25:48.567922  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:48.568436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:48.568469  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:48.568387  143434 retry.go:31] will retry after 853.809635ms: waiting for machine to come up
	I0420 01:25:49.423590  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:49.424154  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:49.424178  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:49.424099  143434 retry.go:31] will retry after 1.096859163s: waiting for machine to come up
	I0420 01:25:50.522906  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:50.523406  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:50.523436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:50.523350  143434 retry.go:31] will retry after 983.057252ms: waiting for machine to come up
	I0420 01:25:51.508033  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:51.508557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:51.508596  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:51.508497  143434 retry.go:31] will retry after 1.463876638s: waiting for machine to come up
	I0420 01:25:52.974032  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:52.974508  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:52.974536  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:52.974459  143434 retry.go:31] will retry after 1.859889372s: waiting for machine to come up
	I0420 01:25:50.183489  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:53.262055  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:51.389972  142057 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.583237436s)
	I0420 01:25:51.390002  142057 crio.go:469] duration metric: took 2.583356337s to extract the tarball
	I0420 01:25:51.390010  142057 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:51.434741  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:51.489945  142057 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:51.489974  142057 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:51.489984  142057 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0420 01:25:51.490126  142057 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-269507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:51.490226  142057 ssh_runner.go:195] Run: crio config
	I0420 01:25:51.548273  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:25:51.548299  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:51.548316  142057 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:51.548356  142057 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-269507 NodeName:embed-certs-269507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:51.548534  142057 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-269507"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:51.548614  142057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:51.560359  142057 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:51.560428  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:51.571609  142057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0420 01:25:51.594462  142057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:51.621417  142057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0420 01:25:51.649250  142057 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:51.655304  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
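Both /etc/hosts updates above (host.minikube.internal and control-plane.minikube.internal) use the same idempotent pattern: filter out any existing line for the hostname, append a fresh "IP<TAB>hostname" entry, and copy the result back into place. Below is a minimal Go sketch of that pattern using only the standard library; the IP, hostname and path are taken from the log, but this is an illustration, not minikube's implementation.

// Sketch: idempotently pin a hostname in /etc/hosts, mirroring the bash
// one-liner in the log: drop any existing line for the hostname, then
// append "IP\thostname". Writing /etc/hosts requires root.
package main

import (
	"fmt"
	"os"
	"strings"
)

func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Equivalent of `grep -v $'\t<host>$'`: skip stale entries for this hostname.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.50.184", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}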
	I0420 01:25:51.675476  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:51.809652  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:51.829341  142057 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507 for IP: 192.168.50.184
	I0420 01:25:51.829405  142057 certs.go:194] generating shared ca certs ...
	I0420 01:25:51.829430  142057 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:51.829627  142057 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:51.829687  142057 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:51.829697  142057 certs.go:256] generating profile certs ...
	I0420 01:25:51.829823  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/client.key
	I0420 01:25:52.088423  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key.c1e63643
	I0420 01:25:52.088542  142057 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key
	I0420 01:25:52.088748  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:52.088811  142057 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:52.088841  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:52.088880  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:52.088919  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:52.088959  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:52.089020  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:52.090046  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:52.130739  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:52.163426  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:52.202470  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:52.232070  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0420 01:25:52.265640  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:52.305670  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:52.336788  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:52.371507  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:52.403015  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:52.433761  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:52.461373  142057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:52.480675  142057 ssh_runner.go:195] Run: openssl version
	I0420 01:25:52.486965  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:52.499466  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506355  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506409  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.514625  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:52.530107  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:52.544051  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549426  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549495  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.555960  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:25:52.569332  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:52.583057  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588323  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588390  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.594622  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:52.607021  142057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:52.612270  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:52.619182  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:52.626168  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:52.633276  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:52.639840  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:52.646478  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
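The run of `openssl x509 -noout -checkend 86400` commands above asks whether each control-plane certificate will still be valid 24 hours from now (a non-zero exit would trigger regeneration). A hypothetical equivalent using Go's crypto/x509 follows; the certificate path is one of the files checked in the log, and this is a sketch rather than minikube's cert-checking code.

// Sketch: Go equivalent of `openssl x509 -noout -checkend 86400` for one
// certificate checked in the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path will be expired d from now.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}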
	I0420 01:25:52.652982  142057 kubeadm.go:391] StartCluster: {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:52.653130  142057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:52.653182  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.699113  142057 cri.go:89] found id: ""
	I0420 01:25:52.699200  142057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:52.712835  142057 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:52.712859  142057 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:52.712867  142057 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:52.712914  142057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:52.726130  142057 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:52.727354  142057 kubeconfig.go:125] found "embed-certs-269507" server: "https://192.168.50.184:8443"
	I0420 01:25:52.729600  142057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:52.744185  142057 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0420 01:25:52.744217  142057 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:52.744231  142057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:52.744292  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.792889  142057 cri.go:89] found id: ""
	I0420 01:25:52.792967  142057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:52.812771  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:52.824478  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:52.824495  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:52.824533  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:25:52.835612  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:52.835679  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:52.847089  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:25:52.858049  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:52.858126  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:52.872787  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.886588  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:52.886649  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.899467  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:25:52.910884  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:52.910942  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:52.922217  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:52.933432  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:53.108167  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.044709  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.257949  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.327450  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.426738  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:54.426849  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:54.926955  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.427198  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.489075  142057 api_server.go:72] duration metric: took 1.06233038s to wait for apiserver process to appear ...
	I0420 01:25:55.489109  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:55.489137  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:55.489682  142057 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0420 01:25:55.989278  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:54.836137  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:54.836639  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:54.836670  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:54.836584  143434 retry.go:31] will retry after 2.172259495s: waiting for machine to come up
	I0420 01:25:57.011412  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:57.011810  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:57.011840  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:57.011782  143434 retry.go:31] will retry after 2.279304552s: waiting for machine to come up
	I0420 01:25:55.684205  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:57.686312  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:58.334562  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.334594  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.334614  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.344779  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.344814  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.490111  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.499158  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.499194  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:58.989417  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.996443  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.996477  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.489585  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.496235  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:59.496271  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.989892  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.994154  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:26:00.000276  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:26:00.000301  142057 api_server.go:131] duration metric: took 4.511183577s to wait for apiserver health ...
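The healthz wait above starts at connection refused, then sees 403 (the anonymous probe is rejected before the RBAC bootstrap roles exist), then 500 while individual poststarthooks are still failing, and finally 200. A rough sketch of such a polling loop using only the standard library and a placeholder timeout; it is not the code behind api_server.go.

// Sketch: poll the apiserver /healthz endpoint until it answers 200 "ok",
// tolerating connection refused, 403 and 500 on the way up.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The bootstrap apiserver serves a cluster-CA-signed cert; skip
		// verification in this sketch rather than wiring up the CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.184:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}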
	I0420 01:26:00.000311  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:26:00.000317  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:00.002217  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:26:00.003646  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:26:00.018114  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
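The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI conflist for the 10.244.0.0/16 pod CIDR chosen earlier. The sketch below emits an illustrative bridge+portmap conflist of the same shape; field names follow the upstream bridge, host-local and portmap plugins, but the output is not byte-for-byte the file minikube writes.

// Sketch: generate an illustrative bridge+portmap CNI conflist for
// podCIDR 10.244.0.0/16 and print it (writing into /etc/cni/net.d requires root).
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}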
	I0420 01:26:00.040866  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:26:00.050481  142057 system_pods.go:59] 8 kube-system pods found
	I0420 01:26:00.050514  142057 system_pods.go:61] "coredns-7db6d8ff4d-79bzc" [af5f0029-75b5-4131-8c60-5a4fee48c618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:26:00.050524  142057 system_pods.go:61] "etcd-embed-certs-269507" [d6dfc301-0cfb-4bfb-99f7-948b77b38f53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:26:00.050533  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [915deee2-f571-4337-bcdc-07f40d06b9c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:26:00.050539  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [21c885b0-6d1b-4593-87f3-141e512af7dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:26:00.050545  142057 system_pods.go:61] "kube-proxy-crzk6" [d5972e9a-15cd-4b62-90d5-c10bdfa20989] Running
	I0420 01:26:00.050553  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [1e556102-d4c9-494c-baf2-ab7e62d7d1e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:26:00.050559  142057 system_pods.go:61] "metrics-server-569cc877fc-8s79l" [1dc06e4a-3f47-4ef1-8757-81262c52fe55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:26:00.050583  142057 system_pods.go:61] "storage-provisioner" [f7b03907-0042-48d8-981b-1b8e665d58e7] Running
	I0420 01:26:00.050600  142057 system_pods.go:74] duration metric: took 9.699819ms to wait for pod list to return data ...
	I0420 01:26:00.050608  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:26:00.053915  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:26:00.053964  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:26:00.053975  142057 node_conditions.go:105] duration metric: took 3.363162ms to run NodePressure ...
	I0420 01:26:00.053994  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:00.327736  142057 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332409  142057 kubeadm.go:733] kubelet initialised
	I0420 01:26:00.332434  142057 kubeadm.go:734] duration metric: took 4.671334ms waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332446  142057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:26:00.338296  142057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
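pod_ready.go keeps polling each system-critical pod until its Ready condition turns True or the 4m0s budget runs out. Below is a hedged client-go sketch of that style of check; the kubeconfig path is a placeholder, the pod name is copied from the log, and this is not minikube's pod_ready.go.

// Sketch: wait for a pod's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test harness uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 4 minutes, mirroring the 4m0s budget in the log.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-79bzc", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}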
	I0420 01:25:59.292382  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:59.292905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:59.292939  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:59.292852  143434 retry.go:31] will retry after 4.056028382s: waiting for machine to come up
	I0420 01:26:03.350591  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:03.351022  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:26:03.351047  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:26:03.350978  143434 retry.go:31] will retry after 5.38819739s: waiting for machine to come up
	I0420 01:26:00.184338  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.684685  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.345607  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:03.850887  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:03.850915  142057 pod_ready.go:81] duration metric: took 3.512592061s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:03.850929  142057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:05.857665  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:05.183082  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:07.682906  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:10.191165  141746 start.go:364] duration metric: took 1m1.9514957s to acquireMachinesLock for "no-preload-338118"
	I0420 01:26:10.191222  141746 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:26:10.191235  141746 fix.go:54] fixHost starting: 
	I0420 01:26:10.191624  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:26:10.191668  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:26:10.212169  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I0420 01:26:10.212568  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:26:10.213074  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:26:10.213120  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:26:10.213524  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:26:10.213755  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:10.213957  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:26:10.215578  141746 fix.go:112] recreateIfNeeded on no-preload-338118: state=Stopped err=<nil>
	I0420 01:26:10.215604  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	W0420 01:26:10.215788  141746 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:26:10.217632  141746 out.go:177] * Restarting existing kvm2 VM for "no-preload-338118" ...
	I0420 01:26:10.218915  141746 main.go:141] libmachine: (no-preload-338118) Calling .Start
	I0420 01:26:10.219094  141746 main.go:141] libmachine: (no-preload-338118) Ensuring networks are active...
	I0420 01:26:10.219820  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network default is active
	I0420 01:26:10.220181  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network mk-no-preload-338118 is active
	I0420 01:26:10.220584  141746 main.go:141] libmachine: (no-preload-338118) Getting domain xml...
	I0420 01:26:10.221275  141746 main.go:141] libmachine: (no-preload-338118) Creating domain...
	I0420 01:26:08.363522  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:09.858701  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:09.858731  142057 pod_ready.go:81] duration metric: took 6.007793209s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:09.858742  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:08.743367  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.743867  142411 main.go:141] libmachine: (old-k8s-version-564860) Found IP for machine: 192.168.61.91
	I0420 01:26:08.743896  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserving static IP address...
	I0420 01:26:08.743914  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has current primary IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.744294  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.744324  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserved static IP address: 192.168.61.91
	I0420 01:26:08.744344  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | skip adding static IP to network mk-old-k8s-version-564860 - found existing host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"}
	I0420 01:26:08.744368  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Getting to WaitForSSH function...
	I0420 01:26:08.744387  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting for SSH to be available...
	I0420 01:26:08.746714  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747119  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.747155  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747278  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH client type: external
	I0420 01:26:08.747314  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa (-rw-------)
	I0420 01:26:08.747346  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:08.747359  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | About to run SSH command:
	I0420 01:26:08.747373  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | exit 0
	I0420 01:26:08.877633  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:08.878016  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:26:08.878715  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:08.881556  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.881982  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.882028  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.882326  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:26:08.882586  142411 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:08.882613  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:08.882853  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:08.885133  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885479  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.885510  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885647  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:08.885843  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886029  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886192  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:08.886403  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:08.886642  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:08.886657  142411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:09.006625  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:09.006655  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.006914  142411 buildroot.go:166] provisioning hostname "old-k8s-version-564860"
	I0420 01:26:09.006940  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.007144  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.010016  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010349  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.010374  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010597  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.010841  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011040  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011235  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.011439  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.011682  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.011718  142411 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-564860 && echo "old-k8s-version-564860" | sudo tee /etc/hostname
	I0420 01:26:09.155581  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-564860
	
	I0420 01:26:09.155612  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.158583  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159021  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.159068  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159285  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.159519  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159747  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159933  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.160128  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.160362  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.160390  142411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-564860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-564860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-564860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:09.288804  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:09.288834  142411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:09.288856  142411 buildroot.go:174] setting up certificates
	I0420 01:26:09.288867  142411 provision.go:84] configureAuth start
	I0420 01:26:09.288877  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.289286  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:09.292454  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.292884  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.292923  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.293076  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.295234  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295537  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.295565  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295675  142411 provision.go:143] copyHostCerts
	I0420 01:26:09.295747  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:09.295758  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:09.295811  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:09.295936  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:09.295951  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:09.295981  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:09.296063  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:09.296075  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:09.296095  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:09.296154  142411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-564860 san=[127.0.0.1 192.168.61.91 localhost minikube old-k8s-version-564860]
	I0420 01:26:09.436313  142411 provision.go:177] copyRemoteCerts
	I0420 01:26:09.436373  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:09.436401  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.439316  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.439743  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439856  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.440057  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.440226  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.440360  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:09.529141  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:09.558376  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0420 01:26:09.586393  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:26:09.615274  142411 provision.go:87] duration metric: took 326.393984ms to configureAuth
	I0420 01:26:09.615300  142411 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:09.615501  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:26:09.615590  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.618470  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.618905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.618938  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.619141  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.619325  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619505  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619662  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.619862  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.620073  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.620091  142411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:09.924929  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:09.924958  142411 machine.go:97] duration metric: took 1.042352034s to provisionDockerMachine
	I0420 01:26:09.924973  142411 start.go:293] postStartSetup for "old-k8s-version-564860" (driver="kvm2")
	I0420 01:26:09.924985  142411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:09.925021  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:09.925441  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:09.925485  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.927985  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928377  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.928407  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928565  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.928770  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.928944  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.929114  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.020189  142411 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:10.025578  142411 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:10.025607  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:10.025707  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:10.025795  142411 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:10.025888  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:10.038138  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:10.065063  142411 start.go:296] duration metric: took 140.07164ms for postStartSetup
	I0420 01:26:10.065111  142411 fix.go:56] duration metric: took 24.94209431s for fixHost
	I0420 01:26:10.065139  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.068099  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068493  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.068544  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068697  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.068916  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069255  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.069455  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:10.069662  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:10.069678  142411 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:10.190955  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576370.174630368
	
	I0420 01:26:10.190984  142411 fix.go:216] guest clock: 1713576370.174630368
	I0420 01:26:10.190994  142411 fix.go:229] Guest: 2024-04-20 01:26:10.174630368 +0000 UTC Remote: 2024-04-20 01:26:10.065116719 +0000 UTC m=+276.709087933 (delta=109.513649ms)
	I0420 01:26:10.191036  142411 fix.go:200] guest clock delta is within tolerance: 109.513649ms
	I0420 01:26:10.191044  142411 start.go:83] releasing machines lock for "old-k8s-version-564860", held for 25.068071712s
	I0420 01:26:10.191074  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.191368  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:10.194872  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195333  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.195365  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195510  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196060  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196253  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196331  142411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:10.196375  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.196439  142411 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:10.196467  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.199156  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199522  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.199572  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199760  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.199975  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200098  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.200137  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.200165  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.200326  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.200700  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.200857  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200992  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.201150  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.283430  142411 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:10.310703  142411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:10.462457  142411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:10.470897  142411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:10.470993  142411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:10.489867  142411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:10.489899  142411 start.go:494] detecting cgroup driver to use...
	I0420 01:26:10.489996  142411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:10.512741  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:10.530013  142411 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:10.530077  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:10.548567  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:10.565645  142411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:10.693390  142411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:10.878889  142411 docker.go:233] disabling docker service ...
	I0420 01:26:10.878973  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:10.901233  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:10.915219  142411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:11.053815  142411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:11.201766  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:11.218569  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:11.240543  142411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0420 01:26:11.240604  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.253384  142411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:11.253460  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.268703  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.281575  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
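The three sed edits above pin the pause image, switch CRI-O's cgroup manager to cgroupfs, and re-insert conmon_cgroup = "pod" right after it. A minimal sketch (shell on the guest; anything else in 02-crio.conf is not visible in this log) for confirming the drop-in ended up with those values:

	# print the three keys the sed commands above rewrote
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the substitutions above:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"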
	I0420 01:26:11.296477  142411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:11.312458  142411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:11.328008  142411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:11.328076  142411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:11.349027  142411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:11.362064  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:11.500624  142411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:11.665985  142411 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:11.666061  142411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:11.672929  142411 start.go:562] Will wait 60s for crictl version
	I0420 01:26:11.673006  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:11.678398  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:11.727572  142411 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:11.727663  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.760504  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.803463  142411 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0420 01:26:11.804782  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:11.807755  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808135  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:11.808177  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808396  142411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:11.813653  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:11.830618  142411 kubeadm.go:877] updating cluster {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:11.830793  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:26:11.830874  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:11.889149  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:11.889218  142411 ssh_runner.go:195] Run: which lz4
	I0420 01:26:11.894461  142411 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:26:11.900427  142411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:26:11.900456  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0420 01:26:10.183110  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:12.184209  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:11.636722  141746 main.go:141] libmachine: (no-preload-338118) Waiting to get IP...
	I0420 01:26:11.637635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.638048  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.638135  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.638011  143635 retry.go:31] will retry after 264.135122ms: waiting for machine to come up
	I0420 01:26:11.903486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.904008  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.904053  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.903958  143635 retry.go:31] will retry after 367.952741ms: waiting for machine to come up
	I0420 01:26:12.273951  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.274547  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.274584  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.274491  143635 retry.go:31] will retry after 390.958735ms: waiting for machine to come up
	I0420 01:26:12.667348  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.667888  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.667915  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.667820  143635 retry.go:31] will retry after 554.212994ms: waiting for machine to come up
	I0420 01:26:13.223423  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.224158  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.224184  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.224058  143635 retry.go:31] will retry after 686.102207ms: waiting for machine to come up
	I0420 01:26:13.911430  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.912019  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.912042  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.911968  143635 retry.go:31] will retry after 875.263983ms: waiting for machine to come up
	I0420 01:26:14.788949  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:14.789431  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:14.789481  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:14.789392  143635 retry.go:31] will retry after 847.129796ms: waiting for machine to come up
	I0420 01:26:15.637863  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:15.638348  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:15.638379  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:15.638288  143635 retry.go:31] will retry after 1.162423805s: waiting for machine to come up
	I0420 01:26:11.866297  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:13.868499  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:14.867208  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.867241  142057 pod_ready.go:81] duration metric: took 5.008488667s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.867254  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875100  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.875119  142057 pod_ready.go:81] duration metric: took 7.856647ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875131  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880630  142057 pod_ready.go:92] pod "kube-proxy-crzk6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.880651  142057 pod_ready.go:81] duration metric: took 5.512379ms for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880661  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885625  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.885645  142057 pod_ready.go:81] duration metric: took 4.976632ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885656  142057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.031960  142411 crio.go:462] duration metric: took 2.137532848s to copy over tarball
	I0420 01:26:14.032043  142411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:26:17.581625  142411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.549548059s)
	I0420 01:26:17.581660  142411 crio.go:469] duration metric: took 3.549666471s to extract the tarball
	I0420 01:26:17.581672  142411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:26:17.633172  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:17.679514  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:17.679544  142411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:17.679710  142411 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.679940  142411 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.680051  142411 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.680061  142411 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.680225  142411 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.680266  142411 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0420 01:26:17.680442  142411 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.680516  142411 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.682336  142411 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.682425  142411 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0420 01:26:17.682428  142411 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.682462  142411 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.682341  142411 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.682512  142411 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.682952  142411 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.682955  142411 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.846602  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.850673  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.866812  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.871983  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.876346  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0420 01:26:17.876745  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.881269  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.985788  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.997662  142411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0420 01:26:17.997709  142411 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0420 01:26:17.997716  142411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.997751  142411 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.997778  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:17.997797  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071610  142411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0420 01:26:18.071682  142411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.071705  142411 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0420 01:26:18.071741  142411 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.071760  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071793  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.085631  142411 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0420 01:26:18.085689  142411 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0420 01:26:18.085748  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.087239  142411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0420 01:26:18.087288  142411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.087362  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.094891  142411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0420 01:26:18.094940  142411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:18.094989  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.232524  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.232613  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0420 01:26:18.232649  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.232682  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.232710  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:14.684499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:17.185481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:16.802494  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:16.802977  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:16.803009  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:16.802908  143635 retry.go:31] will retry after 1.370900633s: waiting for machine to come up
	I0420 01:26:18.175474  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:18.175996  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:18.176022  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:18.175943  143635 retry.go:31] will retry after 1.698879408s: waiting for machine to come up
	I0420 01:26:19.876437  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:19.876901  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:19.876932  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:19.876843  143635 retry.go:31] will retry after 2.622833508s: waiting for machine to come up
	I0420 01:26:16.894119  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.894941  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.408724  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0420 01:26:18.408791  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0420 01:26:18.410041  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0420 01:26:18.410136  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0420 01:26:18.424042  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0420 01:26:18.428203  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0420 01:26:18.428295  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0420 01:26:18.450170  142411 cache_images.go:92] duration metric: took 770.600266ms to LoadCachedImages
	W0420 01:26:18.450288  142411 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0420 01:26:18.450305  142411 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.20.0 crio true true} ...
	I0420 01:26:18.450428  142411 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-564860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
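The kubelet unit rendered above is installed on the guest as a systemd drop-in (the drop-in path itself is not shown in this excerpt). A quick, hedged way to check what the node actually loaded and whether the ExecStart flags took effect:

	# show the kubelet unit plus every drop-in systemd merged into it
	systemctl cat kubelet
	# list the running kubelet with its full command line
	pgrep -a kubelet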
	I0420 01:26:18.450522  142411 ssh_runner.go:195] Run: crio config
	I0420 01:26:18.503362  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:26:18.503407  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:18.503427  142411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:18.503463  142411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-564860 NodeName:old-k8s-version-564860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0420 01:26:18.503671  142411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-564860"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
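
The "(MISSING)" sequences in the rendered config above, and in later commands such as printf %!s(MISSING), date +%!s(MISSING).%!N(MISSING) and -printf "%!p(MISSING), ", look like Go Printf artifacts rather than the values actually written to disk: the strings contain literal '%' characters and appear to have been passed through a Printf-style logger with no matching arguments, so fmt substitutes "%!<verb>(MISSING)" for each unmatched verb. The intended kubelet thresholds are therefore plain percentages such as nodefs.available: "0%". A minimal Go snippet reproducing the effect:

package main

import "fmt"

func main() {
	// A string containing a literal '%' used as a Printf-style format
	// string with no matching argument: fmt replaces the unmatched
	// verb with "%!<verb>(MISSING)".
	fmt.Println(fmt.Sprintf(`nodefs.available: "0%"`))
	// Prints: nodefs.available: "0%!"(MISSING)
}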
	
	I0420 01:26:18.503745  142411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0420 01:26:18.516393  142411 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:18.516475  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:18.529038  142411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0420 01:26:18.550442  142411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:18.572012  142411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0420 01:26:18.595682  142411 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:18.602036  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:18.622226  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:18.774466  142411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:18.795074  142411 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860 for IP: 192.168.61.91
	I0420 01:26:18.795104  142411 certs.go:194] generating shared ca certs ...
	I0420 01:26:18.795125  142411 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:18.795301  142411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:18.795342  142411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:18.795352  142411 certs.go:256] generating profile certs ...
	I0420 01:26:18.795433  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key
	I0420 01:26:18.795487  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f
	I0420 01:26:18.795524  142411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key
	I0420 01:26:18.795645  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:18.795675  142411 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:18.795685  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:18.795706  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:18.795735  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:18.795765  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:18.795828  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:18.796607  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:18.845581  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:18.891065  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:18.933536  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:18.977381  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:26:19.009816  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:19.042053  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:19.090614  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:26:19.119554  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:19.147545  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:19.177775  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:19.211008  142411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:19.234399  142411 ssh_runner.go:195] Run: openssl version
	I0420 01:26:19.242808  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:19.256132  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261681  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261739  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.270546  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:19.284112  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:19.296998  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302497  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302551  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.310883  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:19.325130  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:19.338964  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344915  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344986  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.351926  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:19.366428  142411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:19.372391  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:19.379606  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:19.386698  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:19.395102  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:19.401981  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:19.409477  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
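
The stat and repeated openssl x509 -noout -in <cert> -checkend 86400 runs above confirm that the existing apiserver, etcd and front-proxy certificates will still be valid 24 hours from now before the control plane is restarted. A rough, stdlib-only sketch of an equivalent check in Go (an illustration, not minikube's actual code; the path in main is only an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given duration (the openssl -checkend equivalent).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Example path, mirroring one of the certs checked in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}

openssl -checkend exits non-zero when the certificate would expire within the given number of seconds; the helper above mirrors that by comparing NotAfter against the current time plus the window.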
	I0420 01:26:19.416444  142411 kubeadm.go:391] StartCluster: {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:19.416557  142411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:19.416600  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.460782  142411 cri.go:89] found id: ""
	I0420 01:26:19.460884  142411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:19.473812  142411 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:19.473832  142411 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:19.473838  142411 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:19.473899  142411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:19.486686  142411 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:19.487757  142411 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:26:19.488411  142411 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-76456/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-564860" cluster setting kubeconfig missing "old-k8s-version-564860" context setting]
	I0420 01:26:19.489438  142411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:19.491237  142411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:19.503483  142411 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0420 01:26:19.503519  142411 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:19.503530  142411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:19.503597  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.546350  142411 cri.go:89] found id: ""
	I0420 01:26:19.546438  142411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:19.568177  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:19.580545  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:19.580573  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:19.580658  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:19.592945  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:19.593010  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:19.605598  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:19.617261  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:19.617346  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:19.629242  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.640143  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:19.640211  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.654226  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:19.666207  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:19.666275  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:19.678899  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:19.694374  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:19.845435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.619142  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.891265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.020834  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.124545  142411 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:21.124652  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:21.625462  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.125171  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:23.125077  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
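
The sudo pgrep -xnf kube-apiserver.*minikube.* lines here and below are a plain poll: roughly every 500ms minikube re-runs pgrep until an apiserver process appears (or an overall timeout expires). A minimal sketch of that polling pattern, assuming only the standard library and a pgrep binary on PATH (not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists
// or the timeout elapses.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-f", pattern).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for " + pattern)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}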
	I0420 01:26:19.685129  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.183561  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.502227  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:22.502665  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:22.502696  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:22.502603  143635 retry.go:31] will retry after 3.3877716s: waiting for machine to come up
	I0420 01:26:21.392042  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.392579  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.394230  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.625392  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.125446  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.625035  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.624718  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.625420  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.125162  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.625475  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:28.125637  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.685014  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:27.182545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.891769  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:25.892321  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:25.892353  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:25.892252  143635 retry.go:31] will retry after 3.395760477s: waiting for machine to come up
	I0420 01:26:29.290361  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:29.290858  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:29.290907  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:29.290791  143635 retry.go:31] will retry after 4.86761736s: waiting for machine to come up
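
The retry.go lines above ("will retry after 2.62s / 3.39s / 3.40s / 4.87s: waiting for machine to come up") show the other waiting pattern in this log: retrying with a growing, jittered delay while the VM acquires an IP address. A small helper in the same spirit, purely illustrative and not minikube's retry implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// sleeping a growing, jittered delay between tries.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(5, time.Second, func() error {
		tries++
		if tries < 3 {
			return fmt.Errorf("machine not up yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}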
	I0420 01:26:27.892903  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:30.392680  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:28.625781  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.125145  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.625647  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.125081  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.625404  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.124753  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.124750  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.624841  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:33.125120  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.682707  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:31.682790  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:33.683549  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.162306  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.162883  141746 main.go:141] libmachine: (no-preload-338118) Found IP for machine: 192.168.72.89
	I0420 01:26:34.162912  141746 main.go:141] libmachine: (no-preload-338118) Reserving static IP address...
	I0420 01:26:34.162928  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has current primary IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.163266  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.163296  141746 main.go:141] libmachine: (no-preload-338118) Reserved static IP address: 192.168.72.89
	I0420 01:26:34.163316  141746 main.go:141] libmachine: (no-preload-338118) DBG | skip adding static IP to network mk-no-preload-338118 - found existing host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"}
	I0420 01:26:34.163335  141746 main.go:141] libmachine: (no-preload-338118) DBG | Getting to WaitForSSH function...
	I0420 01:26:34.163350  141746 main.go:141] libmachine: (no-preload-338118) Waiting for SSH to be available...
	I0420 01:26:34.165641  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.165947  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.165967  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.166136  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH client type: external
	I0420 01:26:34.166161  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa (-rw-------)
	I0420 01:26:34.166190  141746 main.go:141] libmachine: (no-preload-338118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:34.166216  141746 main.go:141] libmachine: (no-preload-338118) DBG | About to run SSH command:
	I0420 01:26:34.166232  141746 main.go:141] libmachine: (no-preload-338118) DBG | exit 0
	I0420 01:26:34.293435  141746 main.go:141] libmachine: (no-preload-338118) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:34.293789  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetConfigRaw
	I0420 01:26:34.294381  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.296958  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297355  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.297391  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297670  141746 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/config.json ...
	I0420 01:26:34.297915  141746 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:34.297945  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:34.298191  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.300645  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.301068  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301280  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.301496  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301719  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301895  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.302104  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.302272  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.302284  141746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:34.419082  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:34.419113  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419424  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:26:34.419452  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419715  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.422630  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423010  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.423052  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423212  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.423415  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423599  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423716  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.423928  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.424135  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.424149  141746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-338118 && echo "no-preload-338118" | sudo tee /etc/hostname
	I0420 01:26:34.555223  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-338118
	
	I0420 01:26:34.555254  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.558217  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558606  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.558643  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558792  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.558999  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559241  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559423  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.559655  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.559827  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.559844  141746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-338118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-338118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-338118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:34.684192  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:34.684226  141746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:34.684261  141746 buildroot.go:174] setting up certificates
	I0420 01:26:34.684270  141746 provision.go:84] configureAuth start
	I0420 01:26:34.684289  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.684581  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.687363  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687703  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.687733  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687876  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.690220  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690542  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.690569  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690739  141746 provision.go:143] copyHostCerts
	I0420 01:26:34.690806  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:34.690817  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:34.690869  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:34.691006  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:34.691017  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:34.691038  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:34.691103  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:34.691111  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:34.691130  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:34.691178  141746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.no-preload-338118 san=[127.0.0.1 192.168.72.89 localhost minikube no-preload-338118]
	I0420 01:26:34.899595  141746 provision.go:177] copyRemoteCerts
	I0420 01:26:34.899652  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:34.899676  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.902298  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902745  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.902777  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902956  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.903150  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.903309  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.903457  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:34.993263  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:35.024837  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0420 01:26:35.054254  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:26:35.082455  141746 provision.go:87] duration metric: took 398.171071ms to configureAuth
	I0420 01:26:35.082488  141746 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:35.082741  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:26:35.082822  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.085868  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086264  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.086313  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086481  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.086708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.086868  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.087051  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.087254  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.087424  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.087440  141746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:35.374277  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:35.374305  141746 machine.go:97] duration metric: took 1.076369907s to provisionDockerMachine
	I0420 01:26:35.374327  141746 start.go:293] postStartSetup for "no-preload-338118" (driver="kvm2")
	I0420 01:26:35.374342  141746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:35.374366  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.374733  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:35.374787  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.378647  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.378998  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.379038  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.379149  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.379353  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.379518  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.379694  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.468711  141746 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:35.473783  141746 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:35.473808  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:35.473929  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:35.474088  141746 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:35.474217  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:35.484161  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:35.511695  141746 start.go:296] duration metric: took 137.354669ms for postStartSetup
	I0420 01:26:35.511751  141746 fix.go:56] duration metric: took 25.320502022s for fixHost
	I0420 01:26:35.511780  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.514635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.515067  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515247  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.515448  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515663  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515814  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.515988  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.516218  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.516240  141746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:35.632029  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576395.615634246
	
	I0420 01:26:35.632057  141746 fix.go:216] guest clock: 1713576395.615634246
	I0420 01:26:35.632067  141746 fix.go:229] Guest: 2024-04-20 01:26:35.615634246 +0000 UTC Remote: 2024-04-20 01:26:35.511757232 +0000 UTC m=+369.861721674 (delta=103.877014ms)
	I0420 01:26:35.632113  141746 fix.go:200] guest clock delta is within tolerance: 103.877014ms
	I0420 01:26:35.632137  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 25.440933699s
	I0420 01:26:35.632168  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.632486  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:35.635888  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636400  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.636440  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636751  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637250  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637448  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637547  141746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:35.637597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.637694  141746 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:35.637720  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.640562  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640800  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640953  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.640969  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641244  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641389  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.641433  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641644  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.641670  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641806  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.641873  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641997  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.642163  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:32.892859  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.893134  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
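The repeated pod_ready.go lines above record minikube polling the metrics-server pod until its Ready condition turns True (it stays False throughout this section). A minimal sketch of such a readiness check against the Kubernetes API types is shown below; the helper name isPodReady is illustrative and is not minikube's actual pod_ready.go code.

    // Illustrative sketch (not minikube's pod_ready.go): checking whether a
    // pod's PodReady condition is True, the same condition these log lines poll.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod has a PodReady condition set to True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // In practice the Pod object would be fetched via client-go; an empty
        // Pod is used here only to keep the sketch self-contained.
        fmt.Println("Ready:", isPodReady(&corev1.Pod{}))
    }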
	I0420 01:26:35.749528  141746 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:35.756960  141746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:35.912075  141746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:35.920264  141746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:35.920355  141746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:35.937729  141746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:35.937753  141746 start.go:494] detecting cgroup driver to use...
	I0420 01:26:35.937811  141746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:35.954425  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:35.970967  141746 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:35.971023  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:35.986186  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:36.000803  141746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:36.114673  141746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:36.273386  141746 docker.go:233] disabling docker service ...
	I0420 01:26:36.273472  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:36.290471  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:36.305722  141746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:36.459528  141746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:36.609105  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:36.627255  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:36.651459  141746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:26:36.651535  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.663171  141746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:36.663255  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.674706  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.686196  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.697909  141746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:36.709625  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.720746  141746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.740333  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
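Taken together, the sed commands above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a sketch of the keys touched here only, and the file may carry other options as well:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]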
	I0420 01:26:36.752898  141746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:36.764600  141746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:36.764653  141746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:36.780697  141746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:36.791440  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:36.936761  141746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:37.095374  141746 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:37.095475  141746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:37.101601  141746 start.go:562] Will wait 60s for crictl version
	I0420 01:26:37.101673  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.106191  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:37.152257  141746 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:37.152361  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.187172  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.225203  141746 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:26:33.625596  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.124972  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.624791  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.125630  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.624815  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.125677  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.625631  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.624883  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:38.124924  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.183893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.184381  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:37.226708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:37.229679  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230090  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:37.230131  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230253  141746 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:37.234914  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:37.249029  141746 kubeadm.go:877] updating cluster {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:37.249155  141746 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:26:37.249208  141746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:37.287235  141746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:26:37.287270  141746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:37.287341  141746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.287379  141746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.287387  141746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.287363  141746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.287414  141746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.287378  141746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.287399  141746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.287365  141746 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288833  141746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.288849  141746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.288863  141746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.288922  141746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.288933  141746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.288831  141746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.288957  141746 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288985  141746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.452705  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.462178  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.463495  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0420 01:26:37.469562  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.480726  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.501069  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.517291  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.533934  141746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0420 01:26:37.533976  141746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.534032  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.578341  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.602332  141746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0420 01:26:37.602381  141746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.602432  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.718979  141746 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0420 01:26:37.719028  141746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0420 01:26:37.719065  141746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0420 01:26:37.719093  141746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.719100  141746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0420 01:26:37.719126  141746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.719153  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719220  141746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0420 01:26:37.719256  141746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.719067  141746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.719155  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719306  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.719309  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719036  141746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.719369  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719154  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.719297  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.733974  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.802462  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.802496  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.802544  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0420 01:26:37.802575  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0420 01:26:37.802637  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.802708  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.802725  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0420 01:26:37.802788  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:37.897150  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0420 01:26:37.897190  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0420 01:26:37.897259  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:37.897268  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0420 01:26:37.897278  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:37.897285  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.897295  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0420 01:26:37.897337  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.902046  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0420 01:26:37.902094  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0420 01:26:37.902151  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:37.902307  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0420 01:26:37.902399  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:37.914016  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0420 01:26:40.184815  141746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.287511777s)
	I0420 01:26:40.184859  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0420 01:26:40.184918  141746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.282742718s)
	I0420 01:26:40.184951  141746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.282534359s)
	I0420 01:26:40.184974  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0420 01:26:40.184981  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0420 01:26:40.185052  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.287690505s)
	I0420 01:26:40.185081  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0420 01:26:40.185113  141746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:40.185175  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
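Each "Loading image" step above shells out to podman on the node to import a cached image tarball. A minimal sketch of that call follows; it runs the command locally rather than over SSH, which is a simplification of what ssh_runner does here.

    // Sketch of the "Loading image" step seen above: shelling out to
    // `sudo podman load -i <cached image tarball>` on the node. The path is
    // taken from the log; running locally instead of over SSH is a simplification.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // loadImage imports one image tarball into the node's container storage.
    func loadImage(tarball string) error {
        cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
        }
        return nil
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/coredns_v1.11.1"); err != nil {
            log.Fatal(err)
        }
    }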
	I0420 01:26:37.392757  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:39.394094  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.624766  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.125330  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.624953  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.125409  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.625125  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.125460  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.625041  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.125103  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:43.125237  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.186531  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.683524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.252666  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067465398s)
	I0420 01:26:42.252710  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0420 01:26:42.252735  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:42.252774  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:44.616564  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.363755421s)
	I0420 01:26:44.616614  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0420 01:26:44.616649  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:44.616713  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:41.394300  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.895493  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.625155  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.124986  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.125834  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.625359  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.125706  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.625115  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.125204  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.625746  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:48.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.183628  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:47.684002  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:46.894590  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.277850916s)
	I0420 01:26:46.894626  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0420 01:26:46.894655  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:46.894712  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:49.158327  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.263583483s)
	I0420 01:26:49.158370  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0420 01:26:49.158406  141746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:49.158478  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:50.223297  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.06478687s)
	I0420 01:26:50.223344  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0420 01:26:50.223382  141746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:50.223452  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:46.393020  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.394414  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:50.893840  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.125441  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.625078  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.124787  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.624817  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.125211  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.625408  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.124903  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.624826  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:53.124728  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.183173  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:52.183563  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:54.187354  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.963876859s)
	I0420 01:26:54.187388  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0420 01:26:54.187416  141746 cache_images.go:123] Successfully loaded all cached images
	I0420 01:26:54.187426  141746 cache_images.go:92] duration metric: took 16.900140079s to LoadCachedImages
	I0420 01:26:54.187439  141746 kubeadm.go:928] updating node { 192.168.72.89 8443 v1.30.0 crio true true} ...
	I0420 01:26:54.187545  141746 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-338118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:54.187608  141746 ssh_runner.go:195] Run: crio config
	I0420 01:26:54.245888  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:26:54.245914  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:54.245928  141746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:54.245954  141746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.89 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-338118 NodeName:no-preload-338118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:26:54.246153  141746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-338118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:54.246232  141746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:26:54.259262  141746 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:54.259360  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:54.270769  141746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0420 01:26:54.290436  141746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:54.311846  141746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0420 01:26:54.332517  141746 ssh_runner.go:195] Run: grep 192.168.72.89	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:54.336874  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:54.350084  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:54.466328  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:54.484511  141746 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118 for IP: 192.168.72.89
	I0420 01:26:54.484545  141746 certs.go:194] generating shared ca certs ...
	I0420 01:26:54.484609  141746 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:54.484846  141746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:54.484960  141746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:54.484996  141746 certs.go:256] generating profile certs ...
	I0420 01:26:54.485165  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/client.key
	I0420 01:26:54.485273  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key.f8d917a4
	I0420 01:26:54.485353  141746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key
	I0420 01:26:54.485543  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:54.485604  141746 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:54.485622  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:54.485667  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:54.485707  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:54.485741  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:54.485804  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:54.486486  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:54.539867  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:54.575443  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:54.609857  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:54.638338  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 01:26:54.672043  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:54.704197  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:54.733771  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:26:54.761911  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:54.789278  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:54.816890  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:54.845884  141746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:54.864508  141746 ssh_runner.go:195] Run: openssl version
	I0420 01:26:54.870717  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:54.883192  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888532  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888588  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.895258  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:54.907346  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:54.919360  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924700  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924773  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.931133  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:54.942845  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:54.954785  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959769  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959856  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.966061  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:54.978389  141746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:54.983591  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:54.990157  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:54.996977  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:55.004103  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:55.010928  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:55.018024  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
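The six openssl runs above ask whether each control-plane certificate will still be valid 24 hours (86400 seconds) from now. A rough Go equivalent of `openssl x509 -checkend 86400` is sketched below; the file path and helper name are examples, not minikube code.

    // Illustrative sketch: the Go equivalent of `openssl x509 -checkend 86400`.
    // expiresWithin reports whether the PEM-encoded certificate expires within d.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True if the certificate's NotAfter falls before now+d.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }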
	I0420 01:26:55.024639  141746 kubeadm.go:391] StartCluster: {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:55.024733  141746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:55.024784  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.073888  141746 cri.go:89] found id: ""
	I0420 01:26:55.073954  141746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:55.087179  141746 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:55.087199  141746 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:55.087208  141746 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:55.087255  141746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:55.098975  141746 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:55.100487  141746 kubeconfig.go:125] found "no-preload-338118" server: "https://192.168.72.89:8443"
	I0420 01:26:55.103557  141746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:55.114871  141746 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.89
	I0420 01:26:55.114900  141746 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:55.114914  141746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:55.114983  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.174863  141746 cri.go:89] found id: ""
	I0420 01:26:55.174969  141746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:55.192867  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:55.203842  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:55.203866  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:55.203919  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:55.214476  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:55.214534  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:55.224728  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:55.235353  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:55.235403  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:55.245905  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.256614  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:55.256678  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.266909  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:55.276249  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:55.276294  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:55.285758  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:55.295896  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:55.418331  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:53.394623  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:55.893492  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:53.625614  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.125487  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.625414  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.125150  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.624831  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.125438  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.625450  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.125591  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.625757  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:58.124963  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.186686  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.681991  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.682958  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.156484  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.376987  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.450655  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.517915  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:56.518018  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.018277  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.518215  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.538017  141746 api_server.go:72] duration metric: took 1.020104679s to wait for apiserver process to appear ...
	I0420 01:26:57.538045  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:26:57.538070  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:26:58.392944  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:00.892688  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.625549  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.125177  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.624704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.125709  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.625346  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.124849  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.624947  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.125407  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.625704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:03.125695  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.182564  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.183451  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:02.538442  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:02.538498  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:03.396891  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:05.896375  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.625423  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.124806  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.625232  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.124917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.624983  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.124851  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.625029  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.125554  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.625163  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:08.125455  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.682216  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.683636  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.538926  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:07.538973  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:08.392765  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:10.392933  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:08.625100  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.125395  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.625454  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.125615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.624892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.125366  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.625074  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.125165  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.625629  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:13.124824  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.182884  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.683893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.540046  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:12.540121  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:12.393561  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:14.893756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:13.625040  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.125511  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.624890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.125622  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.625393  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.125215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.625561  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.125263  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.624772  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:18.125597  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.183734  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.683742  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.540652  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:17.540701  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.076616  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": read tcp 192.168.72.1:34174->192.168.72.89:8443: read: connection reset by peer
	I0420 01:27:18.076671  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.077186  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:18.538798  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.539454  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:19.039080  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:17.393196  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:19.395273  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:18.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.124956  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.625579  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.124827  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.625212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:21.125476  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:21.125553  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:21.174633  142411 cri.go:89] found id: ""
	I0420 01:27:21.174668  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.174679  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:21.174686  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:21.174767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:21.218230  142411 cri.go:89] found id: ""
	I0420 01:27:21.218263  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.218275  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:21.218284  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:21.218369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:21.258886  142411 cri.go:89] found id: ""
	I0420 01:27:21.258916  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.258926  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:21.258932  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:21.259003  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:21.306725  142411 cri.go:89] found id: ""
	I0420 01:27:21.306758  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.306769  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:21.306777  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:21.306843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:21.349049  142411 cri.go:89] found id: ""
	I0420 01:27:21.349086  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.349098  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:21.349106  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:21.349174  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:21.392312  142411 cri.go:89] found id: ""
	I0420 01:27:21.392338  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.392346  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:21.392352  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:21.392425  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:21.434121  142411 cri.go:89] found id: ""
	I0420 01:27:21.434148  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.434156  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:21.434162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:21.434210  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:21.473728  142411 cri.go:89] found id: ""
	I0420 01:27:21.473754  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.473762  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:21.473772  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:21.473785  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:21.537607  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:21.537648  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:21.554563  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:21.554604  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:21.674778  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:21.674803  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:21.674829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:21.740625  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:21.740666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
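	[annotation] The diagnostics cycle above checks each control-plane component by running "sudo crictl ps -a --quiet --name=<component>" and treating empty output as "no container found". A standalone sketch of that check, assuming crictl is available on the machine it runs on (in the log it is executed over SSH inside the VM); the helper is illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// foundContainer reports whether crictl lists any container (running or not)
	// whose name matches the given component, mirroring the checks in the log.
	func foundContainer(name string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) != "", nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ok, err := foundContainer(c)
			fmt.Printf("%s: found=%v err=%v\n", c, ok, err)
		}
	}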
	I0420 01:27:20.182461  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:22.682574  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.039641  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:24.039690  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:21.397381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:23.893642  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.284890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:24.301486  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:24.301571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:24.340987  142411 cri.go:89] found id: ""
	I0420 01:27:24.341012  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.341021  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:24.341026  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:24.341102  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:24.379983  142411 cri.go:89] found id: ""
	I0420 01:27:24.380014  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.380024  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:24.380029  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:24.380113  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:24.438700  142411 cri.go:89] found id: ""
	I0420 01:27:24.438729  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.438739  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:24.438745  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:24.438795  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:24.487761  142411 cri.go:89] found id: ""
	I0420 01:27:24.487793  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.487802  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:24.487808  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:24.487870  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:24.529408  142411 cri.go:89] found id: ""
	I0420 01:27:24.529439  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.529448  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:24.529453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:24.529523  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:24.572782  142411 cri.go:89] found id: ""
	I0420 01:27:24.572817  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.572831  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:24.572841  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:24.572910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:24.620651  142411 cri.go:89] found id: ""
	I0420 01:27:24.620684  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.620696  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:24.620704  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:24.620769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:24.659481  142411 cri.go:89] found id: ""
	I0420 01:27:24.659513  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.659525  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:24.659537  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:24.659552  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.714483  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:24.714517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:24.730279  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:24.730316  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:24.804883  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:24.804909  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:24.804926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:24.879557  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:24.879602  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:27.431026  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:27.448112  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:27.448176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:27.494959  142411 cri.go:89] found id: ""
	I0420 01:27:27.494988  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.494999  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:27.495007  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:27.495075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:27.532023  142411 cri.go:89] found id: ""
	I0420 01:27:27.532055  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.532066  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:27.532075  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:27.532151  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:27.578551  142411 cri.go:89] found id: ""
	I0420 01:27:27.578600  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.578613  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:27.578621  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:27.578692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:27.618248  142411 cri.go:89] found id: ""
	I0420 01:27:27.618277  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.618288  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:27.618296  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:27.618363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:27.655682  142411 cri.go:89] found id: ""
	I0420 01:27:27.655714  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.655723  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:27.655729  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:27.655787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:27.696355  142411 cri.go:89] found id: ""
	I0420 01:27:27.696389  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.696400  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:27.696408  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:27.696478  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:27.735354  142411 cri.go:89] found id: ""
	I0420 01:27:27.735378  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.735396  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:27.735402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:27.735460  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:27.775234  142411 cri.go:89] found id: ""
	I0420 01:27:27.775261  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.775269  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:27.775277  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:27.775294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:27.789970  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:27.790005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:27.873345  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:27.873371  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:27.873387  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:27.952309  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:27.952353  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:28.003746  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:28.003792  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.683122  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:27.182311  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:29.040691  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:29.040743  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:26.394161  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:28.893349  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.893785  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.555691  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:30.570962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:30.571041  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:30.613185  142411 cri.go:89] found id: ""
	I0420 01:27:30.613218  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.613227  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:30.613233  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:30.613291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:30.654494  142411 cri.go:89] found id: ""
	I0420 01:27:30.654520  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.654529  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:30.654535  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:30.654600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:30.702605  142411 cri.go:89] found id: ""
	I0420 01:27:30.702634  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.702646  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:30.702653  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:30.702719  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:30.742072  142411 cri.go:89] found id: ""
	I0420 01:27:30.742104  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.742115  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:30.742123  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:30.742191  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:30.793199  142411 cri.go:89] found id: ""
	I0420 01:27:30.793232  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.793244  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:30.793252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:30.793340  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:30.832978  142411 cri.go:89] found id: ""
	I0420 01:27:30.833019  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.833034  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:30.833044  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:30.833126  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:30.875606  142411 cri.go:89] found id: ""
	I0420 01:27:30.875641  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.875655  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:30.875662  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:30.875729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:30.917288  142411 cri.go:89] found id: ""
	I0420 01:27:30.917335  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.917348  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:30.917360  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:30.917375  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:30.996446  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:30.996469  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:30.996485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:31.080494  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:31.080543  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:31.141226  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:31.141260  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:31.212808  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:31.212845  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:29.182651  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:31.183179  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.682476  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:34.041737  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:34.041789  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:33.393756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:35.395120  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.728927  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:33.745749  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:33.745835  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:33.788813  142411 cri.go:89] found id: ""
	I0420 01:27:33.788845  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.788859  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:33.788868  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:33.788936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:33.834918  142411 cri.go:89] found id: ""
	I0420 01:27:33.834948  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.834957  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:33.834963  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:33.835026  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:33.873928  142411 cri.go:89] found id: ""
	I0420 01:27:33.873960  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.873972  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:33.873977  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:33.874027  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:33.921462  142411 cri.go:89] found id: ""
	I0420 01:27:33.921497  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.921510  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:33.921519  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:33.921606  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:33.962280  142411 cri.go:89] found id: ""
	I0420 01:27:33.962308  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.962320  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:33.962329  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:33.962390  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:34.002582  142411 cri.go:89] found id: ""
	I0420 01:27:34.002616  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.002627  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:34.002635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:34.002707  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:34.047383  142411 cri.go:89] found id: ""
	I0420 01:27:34.047410  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.047421  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:34.047428  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:34.047489  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:34.088296  142411 cri.go:89] found id: ""
	I0420 01:27:34.088341  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.088352  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:34.088364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:34.088381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:34.180338  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:34.180380  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:34.224386  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:34.224422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:34.278451  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:34.278488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:34.294377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:34.294409  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:34.377115  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:36.878000  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:36.896875  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:36.896953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:36.953915  142411 cri.go:89] found id: ""
	I0420 01:27:36.953954  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.953968  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:36.953977  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:36.954056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:36.998223  142411 cri.go:89] found id: ""
	I0420 01:27:36.998250  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.998260  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:36.998268  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:36.998337  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:37.069299  142411 cri.go:89] found id: ""
	I0420 01:27:37.069346  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.069358  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:37.069366  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:37.069436  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:37.112068  142411 cri.go:89] found id: ""
	I0420 01:27:37.112100  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.112112  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:37.112119  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:37.112175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:37.155883  142411 cri.go:89] found id: ""
	I0420 01:27:37.155913  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.155924  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:37.155933  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:37.156006  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:37.200979  142411 cri.go:89] found id: ""
	I0420 01:27:37.201007  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.201018  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:37.201026  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:37.201091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:37.241639  142411 cri.go:89] found id: ""
	I0420 01:27:37.241667  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.241678  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:37.241686  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:37.241748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:37.281845  142411 cri.go:89] found id: ""
	I0420 01:27:37.281883  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.281894  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:37.281907  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:37.281923  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:37.327428  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:37.327463  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:37.385213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:37.385248  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:37.400158  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:37.400190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:37.476662  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:37.476687  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:37.476700  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:37.090819  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:27:37.090858  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:27:37.090877  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.124020  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.124076  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:37.538389  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.550894  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.550930  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.038486  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.051983  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:38.052019  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.538297  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.544961  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:27:38.553038  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:27:38.553065  141746 api_server.go:131] duration metric: took 41.015012791s to wait for apiserver health ...
	I0420 01:27:38.553075  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:27:38.553081  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:27:38.554687  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
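	[annotation] The 41s healthz wait that just completed polls https://192.168.72.89:8443/healthz and only moves on once it gets HTTP 200 with body "ok"; the intermediate 403 and 500 responses above are expected while post-start hooks finish. A minimal sketch of such a probe, assuming the VM's self-signed certificate (hence the skipped TLS verification) and an illustrative retry count:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz fetches the healthz endpoint once and returns the status code
	// and body, so a caller can decide whether the apiserver is healthy yet.
	func probeHealthz(url string) (int, string, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test VM uses a self-signed cert
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return 0, "", err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		return resp.StatusCode, string(body), err
	}

	func main() {
		for i := 0; i < 10; i++ {
			code, body, err := probeHealthz("https://192.168.72.89:8443/healthz")
			if err == nil && code == http.StatusOK && body == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("not healthy yet: code=%d err=%v\n", code, err)
			time.Sleep(500 * time.Millisecond)
		}
	}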
	I0420 01:27:35.684396  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.183391  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.555934  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:27:38.575384  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:27:38.609934  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:27:38.637152  141746 system_pods.go:59] 8 kube-system pods found
	I0420 01:27:38.637184  141746 system_pods.go:61] "coredns-7db6d8ff4d-r2hs7" [981840a2-82cd-49e0-8d4f-fbaf05290668] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:27:38.637191  141746 system_pods.go:61] "etcd-no-preload-338118" [92fc0da4-63d3-4f34-a5a6-27b73e7e210d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:27:38.637198  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [9f7bd5df-f733-4944-9ad2-0c9f0ea4529b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:27:38.637206  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [d7a0bd6a-2cd0-4b27-ae83-ae38c1a20c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:27:38.637215  141746 system_pods.go:61] "kube-proxy-zgq86" [d379ae65-c579-47e4-b055-6512e74868a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0420 01:27:38.637219  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [99558213-289d-4682-ba8e-20175c815563] Running
	I0420 01:27:38.637225  141746 system_pods.go:61] "metrics-server-569cc877fc-lcbcz" [1d2b716a-555a-46aa-ae27-c40553c94288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:27:38.637229  141746 system_pods.go:61] "storage-provisioner" [a8316010-8689-42aa-9741-227bf55a16bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:27:38.637236  141746 system_pods.go:74] duration metric: took 27.280844ms to wait for pod list to return data ...
	I0420 01:27:38.637243  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:27:38.640744  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:27:38.640774  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:27:38.640791  141746 node_conditions.go:105] duration metric: took 3.542872ms to run NodePressure ...
	I0420 01:27:38.640813  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:27:38.979785  141746 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987541  141746 kubeadm.go:733] kubelet initialised
	I0420 01:27:38.987570  141746 kubeadm.go:734] duration metric: took 7.752383ms waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987582  141746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:27:38.994929  141746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:38.999872  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999903  141746 pod_ready.go:81] duration metric: took 4.940439ms for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:38.999915  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999923  141746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.004575  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004595  141746 pod_ready.go:81] duration metric: took 4.662163ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.004603  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004608  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.012365  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012386  141746 pod_ready.go:81] duration metric: took 7.773001ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.012393  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012400  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.019091  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019125  141746 pod_ready.go:81] duration metric: took 6.70398ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.019137  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019146  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:37.894228  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:39.899004  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:40.075888  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:40.091313  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:40.091389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:40.134013  142411 cri.go:89] found id: ""
	I0420 01:27:40.134039  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.134048  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:40.134053  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:40.134136  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:40.182108  142411 cri.go:89] found id: ""
	I0420 01:27:40.182140  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.182151  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:40.182158  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:40.182222  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:40.225406  142411 cri.go:89] found id: ""
	I0420 01:27:40.225438  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.225447  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:40.225453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:40.225539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.267599  142411 cri.go:89] found id: ""
	I0420 01:27:40.267627  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.267636  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:40.267645  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:40.267790  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:40.309385  142411 cri.go:89] found id: ""
	I0420 01:27:40.309418  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.309439  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:40.309448  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:40.309525  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:40.351947  142411 cri.go:89] found id: ""
	I0420 01:27:40.351980  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.351993  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:40.352003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:40.352079  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:40.395583  142411 cri.go:89] found id: ""
	I0420 01:27:40.395614  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.395623  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:40.395629  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:40.395692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:40.441348  142411 cri.go:89] found id: ""
	I0420 01:27:40.441397  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.441412  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:40.441426  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:40.441445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:40.498231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:40.498268  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:40.514550  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:40.514578  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:40.593580  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:40.593614  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:40.593631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:40.671736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:40.671778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:43.224892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:43.240876  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:43.240939  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:43.281583  142411 cri.go:89] found id: ""
	I0420 01:27:43.281621  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.281634  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:43.281643  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:43.281705  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:43.321079  142411 cri.go:89] found id: ""
	I0420 01:27:43.321115  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.321125  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:43.321132  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:43.321277  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:43.365827  142411 cri.go:89] found id: ""
	I0420 01:27:43.365855  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.365864  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:43.365870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:43.365921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.184872  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.683826  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:41.025729  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.025868  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:45.526436  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.393681  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:44.401124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.404317  142411 cri.go:89] found id: ""
	I0420 01:27:43.404349  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.404361  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:43.404370  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:43.404443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:43.449268  142411 cri.go:89] found id: ""
	I0420 01:27:43.449299  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.449323  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:43.449331  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:43.449408  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:43.487782  142411 cri.go:89] found id: ""
	I0420 01:27:43.487829  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.487837  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:43.487844  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:43.487909  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:43.526650  142411 cri.go:89] found id: ""
	I0420 01:27:43.526677  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.526688  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:43.526695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:43.526755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:43.565288  142411 cri.go:89] found id: ""
	I0420 01:27:43.565328  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.565340  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:43.565352  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:43.565368  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:43.618013  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:43.618046  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:43.634064  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:43.634101  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:43.710633  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:43.710663  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:43.710679  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:43.796658  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:43.796709  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.352329  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:46.366848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:46.366935  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:46.413643  142411 cri.go:89] found id: ""
	I0420 01:27:46.413676  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.413687  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:46.413695  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:46.413762  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:46.457976  142411 cri.go:89] found id: ""
	I0420 01:27:46.458002  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.458011  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:46.458020  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:46.458086  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:46.500291  142411 cri.go:89] found id: ""
	I0420 01:27:46.500317  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.500328  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:46.500334  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:46.500398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:46.541279  142411 cri.go:89] found id: ""
	I0420 01:27:46.541331  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.541343  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:46.541359  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:46.541442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:46.585613  142411 cri.go:89] found id: ""
	I0420 01:27:46.585642  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.585654  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:46.585661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:46.585726  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:46.634400  142411 cri.go:89] found id: ""
	I0420 01:27:46.634430  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.634441  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:46.634450  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:46.634534  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:46.676276  142411 cri.go:89] found id: ""
	I0420 01:27:46.676305  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.676313  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:46.676320  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:46.676380  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:46.719323  142411 cri.go:89] found id: ""
	I0420 01:27:46.719356  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.719369  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:46.719381  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:46.719398  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:46.799735  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:46.799765  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:46.799790  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:46.878323  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:46.878371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.931870  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:46.931902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:46.983217  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:46.983250  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:45.182485  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.183499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.526708  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:50.034262  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:46.897249  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.393599  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.500147  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:49.517380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:49.517461  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:49.561300  142411 cri.go:89] found id: ""
	I0420 01:27:49.561347  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.561358  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:49.561365  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:49.561432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:49.604569  142411 cri.go:89] found id: ""
	I0420 01:27:49.604594  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.604608  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:49.604614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:49.604664  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:49.644952  142411 cri.go:89] found id: ""
	I0420 01:27:49.644983  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.644999  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:49.645006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:49.645071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:49.694719  142411 cri.go:89] found id: ""
	I0420 01:27:49.694749  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.694757  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:49.694764  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:49.694815  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:49.743821  142411 cri.go:89] found id: ""
	I0420 01:27:49.743849  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.743857  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:49.743865  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:49.743936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:49.789125  142411 cri.go:89] found id: ""
	I0420 01:27:49.789152  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.789161  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:49.789167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:49.789233  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:49.828794  142411 cri.go:89] found id: ""
	I0420 01:27:49.828829  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.828841  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:49.828848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:49.828913  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:49.873335  142411 cri.go:89] found id: ""
	I0420 01:27:49.873366  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.873375  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:49.873385  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:49.873397  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.930590  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:49.930632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:49.946850  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:49.946889  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:50.039200  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:50.039220  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:50.039236  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:50.122067  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:50.122118  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:52.664342  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:52.682978  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:52.683061  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:52.733806  142411 cri.go:89] found id: ""
	I0420 01:27:52.733836  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.733848  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:52.733855  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:52.733921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:52.785977  142411 cri.go:89] found id: ""
	I0420 01:27:52.786008  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.786020  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:52.786027  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:52.786092  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:52.826957  142411 cri.go:89] found id: ""
	I0420 01:27:52.826987  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.826995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:52.827001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:52.827056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:52.876208  142411 cri.go:89] found id: ""
	I0420 01:27:52.876251  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.876265  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:52.876276  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:52.876354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:52.918629  142411 cri.go:89] found id: ""
	I0420 01:27:52.918666  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.918679  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:52.918687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:52.918767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:52.967604  142411 cri.go:89] found id: ""
	I0420 01:27:52.967646  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.967655  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:52.967661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:52.967729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:53.010948  142411 cri.go:89] found id: ""
	I0420 01:27:53.010975  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.010983  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:53.010988  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:53.011039  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:53.055569  142411 cri.go:89] found id: ""
	I0420 01:27:53.055594  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.055611  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:53.055620  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:53.055633  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:53.071038  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:53.071067  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:53.151334  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:53.151364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:53.151381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:53.238509  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:53.238553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:53.284898  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:53.284945  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.183562  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.682524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:53.684003  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.027739  141746 pod_ready.go:92] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.027773  141746 pod_ready.go:81] duration metric: took 12.008613872s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.027785  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033100  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.033124  141746 pod_ready.go:81] duration metric: took 5.331694ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033136  141746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:53.041387  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.542345  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.896822  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:54.395015  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.843065  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:55.856928  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:55.857001  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:55.903058  142411 cri.go:89] found id: ""
	I0420 01:27:55.903092  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.903103  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:55.903111  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:55.903170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:55.944369  142411 cri.go:89] found id: ""
	I0420 01:27:55.944402  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.944414  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:55.944421  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:55.944474  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:55.983485  142411 cri.go:89] found id: ""
	I0420 01:27:55.983510  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.983517  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:55.983523  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:55.983571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:56.021931  142411 cri.go:89] found id: ""
	I0420 01:27:56.021956  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.021964  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:56.021970  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:56.022019  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:56.066671  142411 cri.go:89] found id: ""
	I0420 01:27:56.066705  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.066717  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:56.066724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:56.066788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:56.107724  142411 cri.go:89] found id: ""
	I0420 01:27:56.107783  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.107794  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:56.107800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:56.107854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:56.149201  142411 cri.go:89] found id: ""
	I0420 01:27:56.149234  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.149246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:56.149255  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:56.149328  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:56.189580  142411 cri.go:89] found id: ""
	I0420 01:27:56.189621  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.189633  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:56.189645  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:56.189661  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:56.243425  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:56.243462  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:56.261043  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:56.261079  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:56.341944  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:56.341967  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:56.341980  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:56.423252  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:56.423294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:55.684408  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.183545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:57.542492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.040617  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:56.892991  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.893124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.893660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.968894  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:58.984559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:58.984648  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:59.021603  142411 cri.go:89] found id: ""
	I0420 01:27:59.021634  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.021655  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:59.021666  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:59.021756  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:59.061592  142411 cri.go:89] found id: ""
	I0420 01:27:59.061626  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.061642  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:59.061649  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:59.061701  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:59.101956  142411 cri.go:89] found id: ""
	I0420 01:27:59.101986  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.101996  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:59.102003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:59.102072  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:59.141104  142411 cri.go:89] found id: ""
	I0420 01:27:59.141136  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.141145  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:59.141151  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:59.141221  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:59.188973  142411 cri.go:89] found id: ""
	I0420 01:27:59.189005  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.189014  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:59.189022  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:59.189107  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:59.232598  142411 cri.go:89] found id: ""
	I0420 01:27:59.232632  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.232641  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:59.232647  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:59.232704  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:59.272623  142411 cri.go:89] found id: ""
	I0420 01:27:59.272660  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.272669  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:59.272675  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:59.272739  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:59.309951  142411 cri.go:89] found id: ""
	I0420 01:27:59.309977  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.309984  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:59.309994  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:59.310005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:59.366589  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:59.366626  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:59.382724  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:59.382756  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:59.461072  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:59.461102  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:59.461122  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:59.544736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:59.544769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:02.089118  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:02.105402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:02.105483  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:02.144665  142411 cri.go:89] found id: ""
	I0420 01:28:02.144691  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.144700  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:02.144706  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:02.144759  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:02.187471  142411 cri.go:89] found id: ""
	I0420 01:28:02.187498  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.187508  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:02.187515  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:02.187576  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:02.229206  142411 cri.go:89] found id: ""
	I0420 01:28:02.229233  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.229241  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:02.229247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:02.229335  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:02.279425  142411 cri.go:89] found id: ""
	I0420 01:28:02.279464  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.279478  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:02.279488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:02.279577  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:02.323033  142411 cri.go:89] found id: ""
	I0420 01:28:02.323066  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.323082  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:02.323090  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:02.323155  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:02.360121  142411 cri.go:89] found id: ""
	I0420 01:28:02.360158  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.360170  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:02.360178  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:02.360244  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:02.398756  142411 cri.go:89] found id: ""
	I0420 01:28:02.398786  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.398797  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:02.398804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:02.398867  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:02.437982  142411 cri.go:89] found id: ""
	I0420 01:28:02.438010  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.438018  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:02.438028  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:02.438041  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:02.489396  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:02.489434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:02.506764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:02.506796  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:02.591894  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:02.591915  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:02.591929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:02.675241  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:02.675281  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:00.683139  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.684787  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.540829  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.041823  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:03.393076  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.396351  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.224296  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.238522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:05.238593  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:05.278495  142411 cri.go:89] found id: ""
	I0420 01:28:05.278529  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.278540  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:05.278549  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:05.278621  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:05.318096  142411 cri.go:89] found id: ""
	I0420 01:28:05.318122  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.318130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:05.318136  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:05.318196  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:05.358607  142411 cri.go:89] found id: ""
	I0420 01:28:05.358636  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.358653  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:05.358658  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:05.358749  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:05.417163  142411 cri.go:89] found id: ""
	I0420 01:28:05.417199  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.417211  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:05.417218  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:05.417284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:05.468566  142411 cri.go:89] found id: ""
	I0420 01:28:05.468599  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.468610  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:05.468619  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:05.468691  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:05.514005  142411 cri.go:89] found id: ""
	I0420 01:28:05.514037  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.514047  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:05.514055  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:05.514112  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:05.554972  142411 cri.go:89] found id: ""
	I0420 01:28:05.555001  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.555012  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:05.555020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:05.555083  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:05.596736  142411 cri.go:89] found id: ""
	I0420 01:28:05.596764  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.596773  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:05.596787  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:05.596800  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:05.649680  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:05.649719  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:05.667583  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:05.667614  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:05.743886  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:05.743922  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:05.743939  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:05.827827  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:05.827863  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.384615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.181917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.182902  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.541045  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:09.542114  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.892610  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:10.392899  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:08.401190  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:08.403071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:08.445453  142411 cri.go:89] found id: ""
	I0420 01:28:08.445486  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.445497  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:08.445505  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:08.445573  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:08.487598  142411 cri.go:89] found id: ""
	I0420 01:28:08.487636  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.487649  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:08.487657  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:08.487727  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:08.531416  142411 cri.go:89] found id: ""
	I0420 01:28:08.531445  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.531457  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:08.531465  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:08.531526  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:08.574964  142411 cri.go:89] found id: ""
	I0420 01:28:08.575000  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.575012  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:08.575020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:08.575075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:08.612644  142411 cri.go:89] found id: ""
	I0420 01:28:08.612679  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.612688  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:08.612695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:08.612748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:08.651775  142411 cri.go:89] found id: ""
	I0420 01:28:08.651800  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.651811  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:08.651817  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:08.651869  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:08.692869  142411 cri.go:89] found id: ""
	I0420 01:28:08.692894  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.692902  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:08.692908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:08.692957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:08.731765  142411 cri.go:89] found id: ""
	I0420 01:28:08.731794  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.731805  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:08.731817  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:08.731836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:08.747401  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:08.747445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:08.831069  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:08.831091  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:08.831110  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:08.919053  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:08.919095  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.965814  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:08.965854  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.518303  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:11.535213  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:11.535294  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:11.577182  142411 cri.go:89] found id: ""
	I0420 01:28:11.577214  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.577223  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:11.577229  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:11.577289  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:11.615023  142411 cri.go:89] found id: ""
	I0420 01:28:11.615055  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.615064  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:11.615070  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:11.615138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:11.654062  142411 cri.go:89] found id: ""
	I0420 01:28:11.654089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.654097  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:11.654104  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:11.654170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:11.700846  142411 cri.go:89] found id: ""
	I0420 01:28:11.700875  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.700885  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:11.700892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:11.700966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:11.743061  142411 cri.go:89] found id: ""
	I0420 01:28:11.743089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.743100  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:11.743109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:11.743175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:11.783651  142411 cri.go:89] found id: ""
	I0420 01:28:11.783687  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.783698  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:11.783706  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:11.783781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:11.827099  142411 cri.go:89] found id: ""
	I0420 01:28:11.827130  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.827139  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:11.827144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:11.827197  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:11.867476  142411 cri.go:89] found id: ""
	I0420 01:28:11.867510  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.867523  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:11.867535  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:11.867554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.920211  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:11.920246  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:11.937632  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:11.937670  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:12.014917  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:12.014940  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:12.014955  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:12.096549  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:12.096586  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:09.684447  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.183063  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.041220  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.540620  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.893441  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:15.408953  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.653783  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:14.667893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:14.667955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:14.710098  142411 cri.go:89] found id: ""
	I0420 01:28:14.710153  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.710164  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:14.710172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:14.710240  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:14.750891  142411 cri.go:89] found id: ""
	I0420 01:28:14.750920  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.750929  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:14.750939  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:14.751010  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:14.794062  142411 cri.go:89] found id: ""
	I0420 01:28:14.794103  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.794127  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:14.794135  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:14.794204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:14.834333  142411 cri.go:89] found id: ""
	I0420 01:28:14.834363  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.834375  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:14.834383  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:14.834446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:14.874114  142411 cri.go:89] found id: ""
	I0420 01:28:14.874148  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.874160  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:14.874168  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:14.874238  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:14.912685  142411 cri.go:89] found id: ""
	I0420 01:28:14.912711  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.912720  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:14.912726  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:14.912787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:14.954050  142411 cri.go:89] found id: ""
	I0420 01:28:14.954076  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.954083  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:14.954089  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:14.954150  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:14.992310  142411 cri.go:89] found id: ""
	I0420 01:28:14.992348  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.992357  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:14.992365  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:14.992388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:15.047471  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:15.047512  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:15.065800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:15.065842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:15.146009  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:15.146037  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:15.146058  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:15.232920  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:15.232962  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:17.781215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:17.797404  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:17.797466  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:17.840532  142411 cri.go:89] found id: ""
	I0420 01:28:17.840564  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.840573  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:17.840579  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:17.840636  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:17.881562  142411 cri.go:89] found id: ""
	I0420 01:28:17.881588  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.881596  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:17.881602  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:17.881651  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:17.935068  142411 cri.go:89] found id: ""
	I0420 01:28:17.935098  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.935108  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:17.935115  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:17.935177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:17.980745  142411 cri.go:89] found id: ""
	I0420 01:28:17.980782  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.980795  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:17.980804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:17.980880  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:18.051120  142411 cri.go:89] found id: ""
	I0420 01:28:18.051153  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.051164  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:18.051171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:18.051235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:18.091741  142411 cri.go:89] found id: ""
	I0420 01:28:18.091776  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.091788  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:18.091796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:18.091864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:18.133438  142411 cri.go:89] found id: ""
	I0420 01:28:18.133472  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.133482  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:18.133488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:18.133560  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:18.174624  142411 cri.go:89] found id: ""
	I0420 01:28:18.174665  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.174679  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:18.174694  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:18.174713  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:18.228519  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:18.228563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:18.246452  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:18.246487  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:18.322051  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:18.322074  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:18.322088  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:14.684817  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.182405  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:16.541139  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.041191  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.895052  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.895901  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:18.404873  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:18.404904  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:20.950553  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:20.965081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:20.965139  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:21.007198  142411 cri.go:89] found id: ""
	I0420 01:28:21.007243  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.007255  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:21.007263  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:21.007330  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:21.050991  142411 cri.go:89] found id: ""
	I0420 01:28:21.051019  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.051028  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:21.051034  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:21.051104  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:21.091953  142411 cri.go:89] found id: ""
	I0420 01:28:21.091986  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.091995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:21.092001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:21.092085  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:21.134134  142411 cri.go:89] found id: ""
	I0420 01:28:21.134164  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.134174  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:21.134181  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:21.134251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:21.173698  142411 cri.go:89] found id: ""
	I0420 01:28:21.173724  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.173731  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:21.173737  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:21.173801  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:21.221327  142411 cri.go:89] found id: ""
	I0420 01:28:21.221354  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.221362  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:21.221369  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:21.221428  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:21.262752  142411 cri.go:89] found id: ""
	I0420 01:28:21.262780  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.262791  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:21.262798  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:21.262851  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:21.303497  142411 cri.go:89] found id: ""
	I0420 01:28:21.303524  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.303535  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:21.303547  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:21.303563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:21.358231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:21.358265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:21.373723  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:21.373753  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:21.465016  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:21.465044  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:21.465061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:21.552087  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:21.552117  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:19.683617  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.182720  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:21.540588  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.039211  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.393170  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.396378  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.099938  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:24.116967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:24.117045  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:24.159458  142411 cri.go:89] found id: ""
	I0420 01:28:24.159491  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.159501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:24.159508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:24.159574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:24.206028  142411 cri.go:89] found id: ""
	I0420 01:28:24.206054  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.206065  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:24.206072  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:24.206137  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:24.248047  142411 cri.go:89] found id: ""
	I0420 01:28:24.248088  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.248101  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:24.248109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:24.248176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:24.287867  142411 cri.go:89] found id: ""
	I0420 01:28:24.287898  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.287909  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:24.287917  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:24.287995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:24.329399  142411 cri.go:89] found id: ""
	I0420 01:28:24.329433  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.329444  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:24.329452  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:24.329519  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:24.367846  142411 cri.go:89] found id: ""
	I0420 01:28:24.367871  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.367882  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:24.367889  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:24.367960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:24.414245  142411 cri.go:89] found id: ""
	I0420 01:28:24.414272  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.414283  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:24.414291  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:24.414354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:24.453268  142411 cri.go:89] found id: ""
	I0420 01:28:24.453302  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.453331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:24.453344  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:24.453366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:24.514501  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:24.514546  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:24.529551  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:24.529591  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:24.613734  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.613757  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:24.613775  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:24.693804  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:24.693843  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.238443  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:27.254172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:27.254235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:27.297048  142411 cri.go:89] found id: ""
	I0420 01:28:27.297101  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.297111  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:27.297119  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:27.297181  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:27.340145  142411 cri.go:89] found id: ""
	I0420 01:28:27.340171  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.340181  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:27.340189  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:27.340316  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:27.383047  142411 cri.go:89] found id: ""
	I0420 01:28:27.383077  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.383089  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:27.383096  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:27.383169  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:27.428088  142411 cri.go:89] found id: ""
	I0420 01:28:27.428122  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.428134  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:27.428142  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:27.428206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:27.468257  142411 cri.go:89] found id: ""
	I0420 01:28:27.468300  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.468310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:27.468317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:27.468389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:27.508834  142411 cri.go:89] found id: ""
	I0420 01:28:27.508873  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.508885  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:27.508892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:27.508953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:27.548853  142411 cri.go:89] found id: ""
	I0420 01:28:27.548893  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.548901  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:27.548908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:27.548956  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:27.587841  142411 cri.go:89] found id: ""
	I0420 01:28:27.587875  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.587886  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:27.587899  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:27.587917  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:27.667848  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:27.667888  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.714820  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:27.714856  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:27.766337  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:27.766381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:27.782585  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:27.782627  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:27.856172  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.184768  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.683097  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.040531  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:28.040802  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.542386  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.893091  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:29.393546  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.356809  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:30.372449  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:30.372529  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:30.422164  142411 cri.go:89] found id: ""
	I0420 01:28:30.422198  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.422209  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:30.422218  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:30.422283  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:30.460367  142411 cri.go:89] found id: ""
	I0420 01:28:30.460395  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.460404  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:30.460411  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:30.460498  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:30.508423  142411 cri.go:89] found id: ""
	I0420 01:28:30.508460  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.508471  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:30.508479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:30.508546  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:30.553124  142411 cri.go:89] found id: ""
	I0420 01:28:30.553152  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.553161  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:30.553167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:30.553225  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:30.601866  142411 cri.go:89] found id: ""
	I0420 01:28:30.601908  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.601919  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:30.601939  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:30.602014  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:30.645413  142411 cri.go:89] found id: ""
	I0420 01:28:30.645446  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.645457  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:30.645467  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:30.645539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:30.690955  142411 cri.go:89] found id: ""
	I0420 01:28:30.690988  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.690997  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:30.691006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:30.691077  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:30.732146  142411 cri.go:89] found id: ""
	I0420 01:28:30.732186  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.732197  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:30.732209  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:30.732228  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:30.786890  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:30.786928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:30.802887  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:30.802920  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:30.884422  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:30.884447  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:30.884461  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:30.967504  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:30.967540  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:29.183645  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.683218  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.044031  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:35.540100  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.897363  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:34.392658  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.515720  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:33.531895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:33.531953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:33.574626  142411 cri.go:89] found id: ""
	I0420 01:28:33.574668  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.574682  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:33.574690  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:33.574757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:33.620527  142411 cri.go:89] found id: ""
	I0420 01:28:33.620553  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.620562  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:33.620568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:33.620630  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:33.659685  142411 cri.go:89] found id: ""
	I0420 01:28:33.659711  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.659719  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:33.659724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:33.659773  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:33.699390  142411 cri.go:89] found id: ""
	I0420 01:28:33.699414  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.699422  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:33.699427  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:33.699485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:33.743819  142411 cri.go:89] found id: ""
	I0420 01:28:33.743844  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.743852  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:33.743858  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:33.743907  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:33.788416  142411 cri.go:89] found id: ""
	I0420 01:28:33.788442  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.788450  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:33.788456  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:33.788514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:33.834105  142411 cri.go:89] found id: ""
	I0420 01:28:33.834129  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.834138  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:33.834144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:33.834206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:33.884118  142411 cri.go:89] found id: ""
	I0420 01:28:33.884152  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.884164  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:33.884176  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:33.884193  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:33.940493  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:33.940525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:33.954800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:33.954829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:34.030788  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:34.030812  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:34.030829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:34.119533  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:34.119574  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.667132  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:36.684253  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:36.684334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:36.723598  142411 cri.go:89] found id: ""
	I0420 01:28:36.723629  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.723641  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:36.723649  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:36.723718  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:36.761563  142411 cri.go:89] found id: ""
	I0420 01:28:36.761594  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.761606  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:36.761614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:36.761679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:36.803553  142411 cri.go:89] found id: ""
	I0420 01:28:36.803590  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.803603  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:36.803611  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:36.803674  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:36.840368  142411 cri.go:89] found id: ""
	I0420 01:28:36.840407  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.840421  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:36.840430  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:36.840497  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:36.879689  142411 cri.go:89] found id: ""
	I0420 01:28:36.879724  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.879735  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:36.879743  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:36.879807  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:36.920757  142411 cri.go:89] found id: ""
	I0420 01:28:36.920785  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.920796  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:36.920809  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:36.920871  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:36.957522  142411 cri.go:89] found id: ""
	I0420 01:28:36.957548  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.957556  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:36.957562  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:36.957624  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:36.997358  142411 cri.go:89] found id: ""
	I0420 01:28:36.997390  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.997400  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:36.997409  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:36.997422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:37.055063  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:37.055105  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:37.070691  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:37.070720  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:37.150114  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:37.150140  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:37.150152  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:37.228676  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:37.228711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.182514  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.183398  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.040622  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.539486  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:36.395217  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.893457  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.894381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
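
[Editor's note] The block above is one pass of the control-plane probe that repeats throughout this log: minikube asks CRI-O on the node, via `sudo crictl ps -a --quiet --name=<component>`, whether a kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet or kubernetes-dashboard container exists, and every probe comes back with an empty ID list, so it falls back to collecting kubelet, dmesg, CRI-O and container-status logs. The Go sketch below reproduces only that probe pattern; it is not minikube's cri.go code, it runs crictl locally with os/exec instead of over the ssh_runner used in the log, and it assumes sudo and crictl are available on the node.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the container IDs it prints; an empty result corresponds to the
// `No container was found matching` lines in the log above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		switch {
		case err != nil:
			fmt.Printf("probe %s: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("no container found matching %q\n", c)
		default:
			fmt.Printf("%s: %s\n", c, strings.Join(ids, ", "))
		}
	}
}

On the node in this run every component would hit the "no container found" branch, which matches the repeated `found id: ""` lines.
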
	I0420 01:28:39.776620  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:39.792201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:39.792268  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:39.831544  142411 cri.go:89] found id: ""
	I0420 01:28:39.831568  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.831576  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:39.831588  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:39.831652  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:39.869458  142411 cri.go:89] found id: ""
	I0420 01:28:39.869488  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.869496  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:39.869503  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:39.869564  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:39.911588  142411 cri.go:89] found id: ""
	I0420 01:28:39.911615  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.911626  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:39.911633  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:39.911703  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:39.952458  142411 cri.go:89] found id: ""
	I0420 01:28:39.952489  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.952505  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:39.952513  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:39.952580  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:39.992988  142411 cri.go:89] found id: ""
	I0420 01:28:39.993016  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.993023  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:39.993029  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:39.993117  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:40.038306  142411 cri.go:89] found id: ""
	I0420 01:28:40.038348  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.038359  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:40.038367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:40.038432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:40.082185  142411 cri.go:89] found id: ""
	I0420 01:28:40.082219  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.082230  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:40.082238  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:40.082332  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:40.120346  142411 cri.go:89] found id: ""
	I0420 01:28:40.120373  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.120382  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:40.120391  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:40.120405  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:40.173735  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:40.173769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:40.191808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:40.191844  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:40.271429  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:40.271456  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:40.271473  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:40.361519  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:40.361558  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:42.938354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:42.953088  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:42.953167  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:42.992539  142411 cri.go:89] found id: ""
	I0420 01:28:42.992564  142411 logs.go:276] 0 containers: []
	W0420 01:28:42.992571  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:42.992577  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:42.992637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:43.032017  142411 cri.go:89] found id: ""
	I0420 01:28:43.032059  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.032074  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:43.032082  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:43.032142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:43.077229  142411 cri.go:89] found id: ""
	I0420 01:28:43.077258  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.077266  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:43.077272  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:43.077342  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:43.117107  142411 cri.go:89] found id: ""
	I0420 01:28:43.117128  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.117139  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:43.117145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:43.117206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:43.156262  142411 cri.go:89] found id: ""
	I0420 01:28:43.156297  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.156310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:43.156317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:43.156384  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:43.195897  142411 cri.go:89] found id: ""
	I0420 01:28:43.195927  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.195935  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:43.195942  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:43.195990  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:43.230468  142411 cri.go:89] found id: ""
	I0420 01:28:43.230498  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.230513  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:43.230522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:43.230586  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:43.271980  142411 cri.go:89] found id: ""
	I0420 01:28:43.272009  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.272023  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:43.272035  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:43.272050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:43.331606  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:43.331641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:43.348411  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:43.348437  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:28:40.682973  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.182655  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:42.540341  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.039729  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.393377  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.893276  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:28:43.428628  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:43.428654  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:43.428675  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:43.511471  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:43.511506  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.056166  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:46.071677  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:46.071744  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:46.110710  142411 cri.go:89] found id: ""
	I0420 01:28:46.110740  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.110753  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:46.110761  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:46.110825  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:46.170680  142411 cri.go:89] found id: ""
	I0420 01:28:46.170712  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.170724  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:46.170731  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:46.170794  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:46.216387  142411 cri.go:89] found id: ""
	I0420 01:28:46.216413  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.216421  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:46.216429  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:46.216485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:46.258641  142411 cri.go:89] found id: ""
	I0420 01:28:46.258674  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.258685  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:46.258694  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:46.258755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:46.296359  142411 cri.go:89] found id: ""
	I0420 01:28:46.296395  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.296407  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:46.296416  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:46.296480  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:46.335194  142411 cri.go:89] found id: ""
	I0420 01:28:46.335223  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.335238  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:46.335247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:46.335300  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:46.373748  142411 cri.go:89] found id: ""
	I0420 01:28:46.373777  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.373789  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:46.373796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:46.373860  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:46.416960  142411 cri.go:89] found id: ""
	I0420 01:28:46.416987  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.416995  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:46.417005  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:46.417017  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:46.497542  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:46.497582  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.548086  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:46.548136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:46.607354  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:46.607390  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:46.624379  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:46.624415  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:46.707425  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:45.682511  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.682752  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.046102  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.540014  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.895805  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:50.393001  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
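
[Editor's note] The interleaved pod_ready.go lines come from the other test processes (141746, 141927, 142057), which poll their metrics-server pods and log "Ready":"False" every couple of seconds until the readiness condition turns true or the test times out. Below is a minimal client-go sketch of that kind of readiness poll; it is an illustration only, not minikube's pod_ready.go, and both the kubeconfig path (clientcmd.RecommendedHomeFile) and the k8s-app=metrics-server label selector are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// check behind the "Ready":"False" messages in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err != nil {
			fmt.Println("list failed:", err)
		} else {
			for i := range pods.Items {
				fmt.Printf("%s Ready=%v\n", pods.Items[i].Name, isPodReady(&pods.Items[i]))
			}
		}
		time.Sleep(2 * time.Second)
	}
}
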
	I0420 01:28:49.208459  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:49.223081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:49.223146  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:49.258688  142411 cri.go:89] found id: ""
	I0420 01:28:49.258718  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.258728  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:49.258734  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:49.258791  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:49.296817  142411 cri.go:89] found id: ""
	I0420 01:28:49.296859  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.296870  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:49.296878  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:49.296941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:49.337821  142411 cri.go:89] found id: ""
	I0420 01:28:49.337853  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.337863  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:49.337870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:49.337940  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:49.381360  142411 cri.go:89] found id: ""
	I0420 01:28:49.381384  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.381392  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:49.381397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:49.381463  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:49.420099  142411 cri.go:89] found id: ""
	I0420 01:28:49.420143  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.420154  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:49.420162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:49.420223  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:49.459810  142411 cri.go:89] found id: ""
	I0420 01:28:49.459843  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.459850  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:49.459859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:49.459911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:49.499776  142411 cri.go:89] found id: ""
	I0420 01:28:49.499808  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.499820  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:49.499828  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:49.499894  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:49.536115  142411 cri.go:89] found id: ""
	I0420 01:28:49.536147  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.536158  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:49.536169  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:49.536190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:49.594665  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:49.594701  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:49.611896  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:49.611929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:49.689667  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:49.689685  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:49.689697  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.769061  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:49.769106  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.319299  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:52.336861  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:52.336934  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:52.380690  142411 cri.go:89] found id: ""
	I0420 01:28:52.380717  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.380725  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:52.380731  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:52.380781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:52.429798  142411 cri.go:89] found id: ""
	I0420 01:28:52.429831  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.429843  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:52.429851  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:52.429915  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:52.474087  142411 cri.go:89] found id: ""
	I0420 01:28:52.474120  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.474130  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:52.474139  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:52.474204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:52.514739  142411 cri.go:89] found id: ""
	I0420 01:28:52.514776  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.514789  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:52.514796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:52.514852  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:52.562100  142411 cri.go:89] found id: ""
	I0420 01:28:52.562195  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.562228  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:52.562236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:52.562324  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:52.623266  142411 cri.go:89] found id: ""
	I0420 01:28:52.623301  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.623313  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:52.623321  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:52.623386  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:52.667788  142411 cri.go:89] found id: ""
	I0420 01:28:52.667818  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.667828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:52.667838  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:52.667902  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:52.724607  142411 cri.go:89] found id: ""
	I0420 01:28:52.724636  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.724645  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:52.724654  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:52.724666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.774798  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:52.774836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:52.833949  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:52.833989  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:52.851757  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:52.851787  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:52.939092  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:52.939119  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:52.939136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.684112  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.182596  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:51.540918  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.039528  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.393913  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.892043  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:55.525807  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:55.540481  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:55.540557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:55.584415  142411 cri.go:89] found id: ""
	I0420 01:28:55.584447  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.584458  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:55.584466  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:55.584538  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:55.623920  142411 cri.go:89] found id: ""
	I0420 01:28:55.623955  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.623965  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:55.623973  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:55.624037  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:55.667768  142411 cri.go:89] found id: ""
	I0420 01:28:55.667802  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.667810  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:55.667816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:55.667889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:55.708466  142411 cri.go:89] found id: ""
	I0420 01:28:55.708502  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.708513  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:55.708520  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:55.708600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:55.748797  142411 cri.go:89] found id: ""
	I0420 01:28:55.748838  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.748849  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:55.748857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:55.748919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:55.791714  142411 cri.go:89] found id: ""
	I0420 01:28:55.791743  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.791752  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:55.791761  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:55.791832  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:55.833836  142411 cri.go:89] found id: ""
	I0420 01:28:55.833862  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.833872  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:55.833879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:55.833942  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:55.877425  142411 cri.go:89] found id: ""
	I0420 01:28:55.877462  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.877472  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:55.877484  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:55.877501  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:55.933237  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:55.933280  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:55.949507  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:55.949534  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:56.025596  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:56.025624  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:56.025641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:56.105403  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:56.105439  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:54.683664  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.684401  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.040380  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.040834  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:00.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.893067  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.894882  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.653368  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:58.669367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:58.669429  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:58.712457  142411 cri.go:89] found id: ""
	I0420 01:28:58.712490  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.712501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:58.712508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:58.712574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:58.750246  142411 cri.go:89] found id: ""
	I0420 01:28:58.750273  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.750281  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:58.750287  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:58.750351  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:58.793486  142411 cri.go:89] found id: ""
	I0420 01:28:58.793514  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.793522  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:58.793529  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:58.793595  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:58.839413  142411 cri.go:89] found id: ""
	I0420 01:28:58.839448  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.839461  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:58.839469  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:58.839537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:58.881385  142411 cri.go:89] found id: ""
	I0420 01:28:58.881418  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.881430  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:58.881438  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:58.881509  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:58.923900  142411 cri.go:89] found id: ""
	I0420 01:28:58.923945  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.923965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:58.923975  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:58.924038  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:58.962795  142411 cri.go:89] found id: ""
	I0420 01:28:58.962836  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.962848  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:58.962856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:58.962919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:59.006309  142411 cri.go:89] found id: ""
	I0420 01:28:59.006341  142411 logs.go:276] 0 containers: []
	W0420 01:28:59.006350  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:59.006360  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:59.006372  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:59.062778  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:59.062819  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.078600  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:59.078630  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:59.159340  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:59.159361  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:59.159376  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:59.247257  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:59.247307  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:01.792687  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:01.808507  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:01.808588  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:01.851642  142411 cri.go:89] found id: ""
	I0420 01:29:01.851680  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.851691  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:01.851699  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:01.851765  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:01.891516  142411 cri.go:89] found id: ""
	I0420 01:29:01.891549  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.891560  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:01.891568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:01.891640  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:01.934353  142411 cri.go:89] found id: ""
	I0420 01:29:01.934390  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.934402  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:01.934410  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:01.934479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:01.972552  142411 cri.go:89] found id: ""
	I0420 01:29:01.972587  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.972599  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:01.972607  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:01.972711  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:02.012316  142411 cri.go:89] found id: ""
	I0420 01:29:02.012348  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.012360  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:02.012368  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:02.012423  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:02.056951  142411 cri.go:89] found id: ""
	I0420 01:29:02.056984  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.056994  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:02.057001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:02.057164  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:02.104061  142411 cri.go:89] found id: ""
	I0420 01:29:02.104091  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.104102  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:02.104110  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:02.104163  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:02.144085  142411 cri.go:89] found id: ""
	I0420 01:29:02.144114  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.144125  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:02.144137  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:02.144160  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:02.216560  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:02.216585  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:02.216598  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:02.307178  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:02.307222  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:02.349769  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:02.349798  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:02.401141  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:02.401176  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.185384  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.684462  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.685188  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:02.041060  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.540616  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.393943  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.894095  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
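
[Editor's note] Every "describe nodes" attempt above fails the same way: kubectl gets "connection refused" because nothing is listening on localhost:8443, which is consistent with the empty kube-apiserver probes. A quick way to confirm that symptom independently of kubectl is a plain TCP dial against the port named in the error; the sketch below is a hypothetical helper (only the 8443 port comes from the log).

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port that the describe-nodes errors complain about.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // matches the "connection refused" lines
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
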
	I0420 01:29:04.917513  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:04.934187  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:04.934266  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:04.970258  142411 cri.go:89] found id: ""
	I0420 01:29:04.970289  142411 logs.go:276] 0 containers: []
	W0420 01:29:04.970298  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:04.970304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:04.970359  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:05.012853  142411 cri.go:89] found id: ""
	I0420 01:29:05.012883  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.012893  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:05.012899  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:05.012960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:05.054793  142411 cri.go:89] found id: ""
	I0420 01:29:05.054822  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.054833  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:05.054842  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:05.054910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:05.094637  142411 cri.go:89] found id: ""
	I0420 01:29:05.094674  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.094684  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:05.094701  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:05.094770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:05.134874  142411 cri.go:89] found id: ""
	I0420 01:29:05.134903  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.134912  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:05.134918  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:05.134973  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:05.175637  142411 cri.go:89] found id: ""
	I0420 01:29:05.175668  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.175679  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:05.175687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:05.175752  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:05.217809  142411 cri.go:89] found id: ""
	I0420 01:29:05.217847  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.217860  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:05.217867  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:05.217933  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:05.266884  142411 cri.go:89] found id: ""
	I0420 01:29:05.266917  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.266930  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:05.266941  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:05.266958  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:05.323765  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:05.323818  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:05.338524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:05.338553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:05.419860  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:05.419889  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:05.419906  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:05.506268  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:05.506311  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.055690  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:08.072692  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:08.072758  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:08.116247  142411 cri.go:89] found id: ""
	I0420 01:29:08.116287  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.116296  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:08.116304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:08.116369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:08.163152  142411 cri.go:89] found id: ""
	I0420 01:29:08.163177  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.163185  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:08.163190  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:08.163246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:08.207330  142411 cri.go:89] found id: ""
	I0420 01:29:08.207357  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.207365  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:08.207371  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:08.207422  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:08.249833  142411 cri.go:89] found id: ""
	I0420 01:29:08.249864  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.249873  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:08.249879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:08.249941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:08.290834  142411 cri.go:89] found id: ""
	I0420 01:29:08.290867  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.290876  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:08.290883  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:08.290957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:08.333767  142411 cri.go:89] found id: ""
	I0420 01:29:08.333799  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.333809  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:08.333816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:08.333888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:08.381431  142411 cri.go:89] found id: ""
	I0420 01:29:08.381459  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.381468  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:08.381474  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:08.381532  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:06.183719  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.184829  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.544179  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:09.039956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.394434  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.893184  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:10.897462  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.423702  142411 cri.go:89] found id: ""
	I0420 01:29:08.423727  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.423739  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:08.423751  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:08.423767  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.468422  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:08.468460  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:08.524091  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:08.524125  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:08.540294  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:08.540323  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:08.622439  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:08.622472  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:08.622488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.208472  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:11.225412  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:11.225479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:11.273723  142411 cri.go:89] found id: ""
	I0420 01:29:11.273755  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.273767  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:11.273775  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:11.273840  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:11.316083  142411 cri.go:89] found id: ""
	I0420 01:29:11.316118  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.316130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:11.316137  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:11.316203  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:11.355632  142411 cri.go:89] found id: ""
	I0420 01:29:11.355659  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.355668  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:11.355674  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:11.355734  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:11.397277  142411 cri.go:89] found id: ""
	I0420 01:29:11.397305  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.397327  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:11.397335  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:11.397399  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:11.439333  142411 cri.go:89] found id: ""
	I0420 01:29:11.439357  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.439366  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:11.439372  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:11.439433  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:11.477044  142411 cri.go:89] found id: ""
	I0420 01:29:11.477072  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.477079  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:11.477086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:11.477142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:11.516150  142411 cri.go:89] found id: ""
	I0420 01:29:11.516184  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.516196  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:11.516204  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:11.516274  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:11.557272  142411 cri.go:89] found id: ""
	I0420 01:29:11.557303  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.557331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:11.557344  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:11.557366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.652272  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:11.652319  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:11.700469  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:11.700504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:11.756674  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:11.756711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:11.772377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:11.772407  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:11.851387  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:10.682669  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:12.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:11.041282  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.541986  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.393346  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:15.394909  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:14.352257  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:14.367635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:14.367714  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:14.408757  142411 cri.go:89] found id: ""
	I0420 01:29:14.408779  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.408788  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:14.408794  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:14.408843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:14.455123  142411 cri.go:89] found id: ""
	I0420 01:29:14.455150  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.455159  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:14.455165  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:14.455239  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:14.499546  142411 cri.go:89] found id: ""
	I0420 01:29:14.499573  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.499581  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:14.499587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:14.499635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:14.541811  142411 cri.go:89] found id: ""
	I0420 01:29:14.541841  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.541851  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:14.541859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:14.541923  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:14.586965  142411 cri.go:89] found id: ""
	I0420 01:29:14.586990  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.587001  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:14.587008  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:14.587071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:14.625251  142411 cri.go:89] found id: ""
	I0420 01:29:14.625279  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.625288  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:14.625294  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:14.625377  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:14.665038  142411 cri.go:89] found id: ""
	I0420 01:29:14.665067  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.665079  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:14.665086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:14.665157  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:14.706931  142411 cri.go:89] found id: ""
	I0420 01:29:14.706964  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.706978  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:14.706992  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:14.707044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:14.761681  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:14.761717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:14.776324  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:14.776350  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:14.856707  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:14.856727  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:14.856738  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:14.944019  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:14.944064  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:17.489112  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:17.507594  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:17.507660  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:17.556091  142411 cri.go:89] found id: ""
	I0420 01:29:17.556122  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.556132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:17.556140  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:17.556205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:17.600016  142411 cri.go:89] found id: ""
	I0420 01:29:17.600072  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.600086  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:17.600107  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:17.600171  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:17.643074  142411 cri.go:89] found id: ""
	I0420 01:29:17.643106  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.643118  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:17.643125  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:17.643190  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:17.684798  142411 cri.go:89] found id: ""
	I0420 01:29:17.684827  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.684838  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:17.684845  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:17.684910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:17.725451  142411 cri.go:89] found id: ""
	I0420 01:29:17.725481  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.725494  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:17.725503  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:17.725575  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:17.765918  142411 cri.go:89] found id: ""
	I0420 01:29:17.765944  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.765952  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:17.765959  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:17.766023  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:17.806011  142411 cri.go:89] found id: ""
	I0420 01:29:17.806038  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.806049  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:17.806056  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:17.806122  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:17.848409  142411 cri.go:89] found id: ""
	I0420 01:29:17.848441  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.848453  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:17.848465  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:17.848488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:17.903854  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:17.903900  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:17.919156  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:17.919191  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:18.008073  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:18.008115  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:18.008133  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:18.095887  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:18.095929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:14.687917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.182326  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:16.039159  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:18.040487  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.540830  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.893270  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.392563  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.646919  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:20.664559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:20.664635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:20.714440  142411 cri.go:89] found id: ""
	I0420 01:29:20.714472  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.714481  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:20.714487  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:20.714543  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:20.755249  142411 cri.go:89] found id: ""
	I0420 01:29:20.755276  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.755287  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:20.755294  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:20.755355  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:20.795744  142411 cri.go:89] found id: ""
	I0420 01:29:20.795777  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.795786  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:20.795797  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:20.795864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:20.838083  142411 cri.go:89] found id: ""
	I0420 01:29:20.838111  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.838120  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:20.838128  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:20.838193  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:20.880198  142411 cri.go:89] found id: ""
	I0420 01:29:20.880227  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.880238  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:20.880245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:20.880312  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:20.920496  142411 cri.go:89] found id: ""
	I0420 01:29:20.920522  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.920530  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:20.920536  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:20.920618  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:20.960137  142411 cri.go:89] found id: ""
	I0420 01:29:20.960170  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.960180  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:20.960186  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:20.960251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:20.999583  142411 cri.go:89] found id: ""
	I0420 01:29:20.999624  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.999637  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:20.999649  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:20.999665  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:21.077439  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:21.077476  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:21.121104  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:21.121148  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:21.173871  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:21.173909  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:21.189767  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:21.189795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:21.264715  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:19.682554  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:21.682995  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.543452  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:25.040875  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.393626  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:24.894279  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:23.765605  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:23.782250  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:23.782334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:23.827248  142411 cri.go:89] found id: ""
	I0420 01:29:23.827277  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.827285  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:23.827291  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:23.827349  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:23.867610  142411 cri.go:89] found id: ""
	I0420 01:29:23.867636  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.867645  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:23.867651  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:23.867712  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:23.906244  142411 cri.go:89] found id: ""
	I0420 01:29:23.906271  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.906278  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:23.906283  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:23.906343  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:23.952256  142411 cri.go:89] found id: ""
	I0420 01:29:23.952288  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.952306  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:23.952314  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:23.952378  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:23.992843  142411 cri.go:89] found id: ""
	I0420 01:29:23.992879  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.992888  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:23.992896  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:23.992959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:24.036460  142411 cri.go:89] found id: ""
	I0420 01:29:24.036493  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.036504  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:24.036512  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:24.036582  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:24.075910  142411 cri.go:89] found id: ""
	I0420 01:29:24.075944  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.075955  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:24.075962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:24.076033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:24.122638  142411 cri.go:89] found id: ""
	I0420 01:29:24.122676  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.122688  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:24.122698  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:24.122717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:24.138022  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:24.138061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:24.220977  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:24.220998  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:24.221012  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.302928  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:24.302972  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:24.351237  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:24.351277  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:26.910354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:26.926815  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:26.926900  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:26.966123  142411 cri.go:89] found id: ""
	I0420 01:29:26.966155  142411 logs.go:276] 0 containers: []
	W0420 01:29:26.966165  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:26.966172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:26.966246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:27.011679  142411 cri.go:89] found id: ""
	I0420 01:29:27.011714  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.011727  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:27.011735  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:27.011806  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:27.052116  142411 cri.go:89] found id: ""
	I0420 01:29:27.052141  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.052148  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:27.052155  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:27.052202  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:27.090375  142411 cri.go:89] found id: ""
	I0420 01:29:27.090404  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.090413  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:27.090419  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:27.090476  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:27.131911  142411 cri.go:89] found id: ""
	I0420 01:29:27.131946  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.131957  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:27.131965  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:27.132033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:27.176663  142411 cri.go:89] found id: ""
	I0420 01:29:27.176696  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.176714  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:27.176723  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:27.176788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:27.217806  142411 cri.go:89] found id: ""
	I0420 01:29:27.217836  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.217846  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:27.217853  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:27.217917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:27.253956  142411 cri.go:89] found id: ""
	I0420 01:29:27.253981  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.253989  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:27.253998  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:27.254014  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:27.298225  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:27.298264  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:27.351213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:27.351259  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:27.366352  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:27.366388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:27.466716  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:27.466742  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:27.466770  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.184743  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:26.681862  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:28.683193  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.042377  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.539413  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.395660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.893947  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:30.050528  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:30.065697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:30.065769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:30.104643  142411 cri.go:89] found id: ""
	I0420 01:29:30.104675  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.104686  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:30.104694  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:30.104753  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:30.143864  142411 cri.go:89] found id: ""
	I0420 01:29:30.143892  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.143903  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:30.143910  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:30.143976  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:30.187925  142411 cri.go:89] found id: ""
	I0420 01:29:30.187954  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.187964  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:30.187972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:30.188035  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:30.227968  142411 cri.go:89] found id: ""
	I0420 01:29:30.227995  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.228003  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:30.228009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:30.228059  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.269550  142411 cri.go:89] found id: ""
	I0420 01:29:30.269584  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.269596  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:30.269604  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:30.269672  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:30.311777  142411 cri.go:89] found id: ""
	I0420 01:29:30.311810  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.311819  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:30.311827  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:30.311878  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:30.353569  142411 cri.go:89] found id: ""
	I0420 01:29:30.353601  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.353610  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:30.353617  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:30.353683  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:30.395003  142411 cri.go:89] found id: ""
	I0420 01:29:30.395032  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.395043  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:30.395054  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:30.395066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:30.455495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:30.455536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:30.473749  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:30.473778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:30.555370  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:30.555397  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:30.555417  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:30.637079  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:30.637124  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:33.188917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:33.203689  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:33.203757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:33.246796  142411 cri.go:89] found id: ""
	I0420 01:29:33.246828  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.246840  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:33.246848  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:33.246911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:33.284667  142411 cri.go:89] found id: ""
	I0420 01:29:33.284700  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.284712  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:33.284720  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:33.284782  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:33.328653  142411 cri.go:89] found id: ""
	I0420 01:29:33.328688  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.328701  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:33.328709  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:33.328777  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:33.369081  142411 cri.go:89] found id: ""
	I0420 01:29:33.369107  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.369121  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:33.369130  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:33.369180  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.684861  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:32.689885  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.547492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.040445  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.894902  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.392071  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:33.414282  142411 cri.go:89] found id: ""
	I0420 01:29:33.414313  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.414322  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:33.414327  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:33.414411  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:33.457086  142411 cri.go:89] found id: ""
	I0420 01:29:33.457112  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.457119  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:33.457126  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:33.457176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:33.498686  142411 cri.go:89] found id: ""
	I0420 01:29:33.498716  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.498729  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:33.498738  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:33.498808  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:33.538872  142411 cri.go:89] found id: ""
	I0420 01:29:33.538907  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.538920  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:33.538932  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:33.538959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:33.592586  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:33.592631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:33.609200  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:33.609226  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:33.690795  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:33.690820  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:33.690836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:33.776092  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:33.776131  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:36.331256  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:36.348813  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:36.348892  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:36.397503  142411 cri.go:89] found id: ""
	I0420 01:29:36.397527  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.397534  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:36.397540  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:36.397603  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:36.439638  142411 cri.go:89] found id: ""
	I0420 01:29:36.439667  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.439675  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:36.439685  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:36.439761  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:36.477155  142411 cri.go:89] found id: ""
	I0420 01:29:36.477182  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.477194  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:36.477201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:36.477259  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:36.533326  142411 cri.go:89] found id: ""
	I0420 01:29:36.533360  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.533373  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:36.533381  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:36.533446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:36.573056  142411 cri.go:89] found id: ""
	I0420 01:29:36.573093  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.573107  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:36.573114  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:36.573177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:36.611901  142411 cri.go:89] found id: ""
	I0420 01:29:36.611937  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.611949  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:36.611957  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:36.612017  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:36.656780  142411 cri.go:89] found id: ""
	I0420 01:29:36.656810  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.656823  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:36.656830  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:36.656899  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:36.699872  142411 cri.go:89] found id: ""
	I0420 01:29:36.699906  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.699916  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:36.699928  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:36.699943  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:36.758859  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:36.758895  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:36.775108  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:36.775145  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:36.858001  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:36.858027  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:36.858044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:36.936114  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:36.936154  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:35.182481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:37.182529  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.041125  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.043465  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.540023  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.395316  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.894062  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.894416  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:39.487167  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:39.502929  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:39.502995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:39.547338  142411 cri.go:89] found id: ""
	I0420 01:29:39.547363  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.547371  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:39.547377  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:39.547430  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:39.608684  142411 cri.go:89] found id: ""
	I0420 01:29:39.608714  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.608722  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:39.608728  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:39.608793  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:39.679248  142411 cri.go:89] found id: ""
	I0420 01:29:39.679281  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.679292  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:39.679300  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:39.679361  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:39.725226  142411 cri.go:89] found id: ""
	I0420 01:29:39.725257  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.725270  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:39.725278  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:39.725363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:39.767653  142411 cri.go:89] found id: ""
	I0420 01:29:39.767681  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.767690  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:39.767697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:39.767760  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:39.807848  142411 cri.go:89] found id: ""
	I0420 01:29:39.807885  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.807893  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:39.807900  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:39.807968  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:39.847171  142411 cri.go:89] found id: ""
	I0420 01:29:39.847201  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.847212  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:39.847219  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:39.847284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:39.884959  142411 cri.go:89] found id: ""
	I0420 01:29:39.884996  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.885007  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:39.885034  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:39.885050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:39.959245  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:39.959269  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:39.959286  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:40.041394  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:40.041436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:40.083125  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:40.083171  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:40.139902  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:40.139957  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:42.657038  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:42.673303  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:42.673407  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:42.717081  142411 cri.go:89] found id: ""
	I0420 01:29:42.717106  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.717114  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:42.717120  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:42.717170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:42.762322  142411 cri.go:89] found id: ""
	I0420 01:29:42.762357  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.762367  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:42.762375  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:42.762442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:42.805059  142411 cri.go:89] found id: ""
	I0420 01:29:42.805112  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.805122  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:42.805131  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:42.805201  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:42.848539  142411 cri.go:89] found id: ""
	I0420 01:29:42.848568  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.848580  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:42.848587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:42.848679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:42.887915  142411 cri.go:89] found id: ""
	I0420 01:29:42.887949  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.887960  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:42.887967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:42.888032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:42.938832  142411 cri.go:89] found id: ""
	I0420 01:29:42.938867  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.938878  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:42.938888  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:42.938957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:42.982376  142411 cri.go:89] found id: ""
	I0420 01:29:42.982402  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.982409  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:42.982415  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:42.982477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:43.023264  142411 cri.go:89] found id: ""
	I0420 01:29:43.023293  142411 logs.go:276] 0 containers: []
	W0420 01:29:43.023301  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:43.023313  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:43.023326  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:43.079673  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:43.079714  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:43.094753  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:43.094786  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:43.180113  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:43.180149  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:43.180177  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:43.259830  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:43.259872  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:39.182568  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:41.186805  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.683131  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:42.540687  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.039857  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.392948  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.394081  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.802515  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:45.816908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:45.816965  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:45.861091  142411 cri.go:89] found id: ""
	I0420 01:29:45.861123  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.861132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:45.861138  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:45.861224  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:45.901677  142411 cri.go:89] found id: ""
	I0420 01:29:45.901702  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.901710  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:45.901716  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:45.901767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:45.938301  142411 cri.go:89] found id: ""
	I0420 01:29:45.938325  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.938334  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:45.938339  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:45.938393  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:45.978432  142411 cri.go:89] found id: ""
	I0420 01:29:45.978460  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.978473  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:45.978479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:45.978537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:46.019410  142411 cri.go:89] found id: ""
	I0420 01:29:46.019446  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.019455  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:46.019461  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:46.019524  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:46.071002  142411 cri.go:89] found id: ""
	I0420 01:29:46.071032  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.071041  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:46.071052  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:46.071124  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:46.110362  142411 cri.go:89] found id: ""
	I0420 01:29:46.110391  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.110402  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:46.110409  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:46.110477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:46.152276  142411 cri.go:89] found id: ""
	I0420 01:29:46.152311  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.152322  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:46.152334  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:46.152351  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:46.205121  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:46.205159  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:46.221808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:46.221842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:46.300394  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:46.300418  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:46.300434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:46.391961  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:46.392002  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:45.684038  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.176081  141927 pod_ready.go:81] duration metric: took 4m0.00056563s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	E0420 01:29:48.176112  141927 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:29:48.176130  141927 pod_ready.go:38] duration metric: took 4m7.024291569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:29:48.176166  141927 kubeadm.go:591] duration metric: took 4m16.819079549s to restartPrimaryControlPlane
	W0420 01:29:48.176256  141927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:29:48.176291  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:29:47.040255  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.043956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:47.893875  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.894291  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.945086  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:48.961414  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:48.961491  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:49.010230  142411 cri.go:89] found id: ""
	I0420 01:29:49.010285  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.010299  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:49.010309  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:49.010385  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:49.054455  142411 cri.go:89] found id: ""
	I0420 01:29:49.054481  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.054491  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:49.054499  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:49.054566  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:49.094536  142411 cri.go:89] found id: ""
	I0420 01:29:49.094562  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.094572  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:49.094580  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:49.094740  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:49.134004  142411 cri.go:89] found id: ""
	I0420 01:29:49.134035  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.134046  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:49.134054  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:49.134118  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:49.173697  142411 cri.go:89] found id: ""
	I0420 01:29:49.173728  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.173741  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:49.173750  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:49.173817  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:49.215655  142411 cri.go:89] found id: ""
	I0420 01:29:49.215681  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.215689  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:49.215695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:49.215745  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:49.258282  142411 cri.go:89] found id: ""
	I0420 01:29:49.258312  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.258324  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:49.258332  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:49.258394  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:49.298565  142411 cri.go:89] found id: ""
	I0420 01:29:49.298597  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.298608  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:49.298620  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:49.298638  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:49.378833  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:49.378862  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:49.378880  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:49.467477  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:49.467517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:49.521747  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:49.521788  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:49.583386  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:49.583436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.102969  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:52.122971  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:52.123053  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:52.166166  142411 cri.go:89] found id: ""
	I0420 01:29:52.166199  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.166210  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:52.166219  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:52.166287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:52.206790  142411 cri.go:89] found id: ""
	I0420 01:29:52.206817  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.206824  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:52.206830  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:52.206889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:52.249879  142411 cri.go:89] found id: ""
	I0420 01:29:52.249911  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.249921  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:52.249931  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:52.249997  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:52.293953  142411 cri.go:89] found id: ""
	I0420 01:29:52.293997  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.294009  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:52.294018  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:52.294095  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:52.339447  142411 cri.go:89] found id: ""
	I0420 01:29:52.339478  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.339490  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:52.339497  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:52.339558  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:52.378383  142411 cri.go:89] found id: ""
	I0420 01:29:52.378416  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.378428  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:52.378435  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:52.378488  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:52.423079  142411 cri.go:89] found id: ""
	I0420 01:29:52.423121  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.423130  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:52.423137  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:52.423205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:52.459525  142411 cri.go:89] found id: ""
	I0420 01:29:52.459559  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.459572  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:52.459594  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:52.459610  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:52.567141  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:52.567186  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:52.618194  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:52.618235  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:52.681921  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:52.681959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.699065  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:52.699108  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:52.776829  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:51.540922  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.043224  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:52.397218  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.895147  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:55.277933  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:55.293380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:55.293455  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:55.337443  142411 cri.go:89] found id: ""
	I0420 01:29:55.337475  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.337483  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:55.337491  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:55.337557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:55.375911  142411 cri.go:89] found id: ""
	I0420 01:29:55.375942  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.375951  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:55.375957  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:55.376022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:55.418545  142411 cri.go:89] found id: ""
	I0420 01:29:55.418569  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.418577  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:55.418583  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:55.418635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:55.459343  142411 cri.go:89] found id: ""
	I0420 01:29:55.459378  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.459390  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:55.459397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:55.459452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:55.503851  142411 cri.go:89] found id: ""
	I0420 01:29:55.503878  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.503887  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:55.503895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:55.503959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:55.542533  142411 cri.go:89] found id: ""
	I0420 01:29:55.542556  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.542562  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:55.542568  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:55.542623  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:55.582205  142411 cri.go:89] found id: ""
	I0420 01:29:55.582236  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.582246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:55.582252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:55.582314  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:55.624727  142411 cri.go:89] found id: ""
	I0420 01:29:55.624757  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.624769  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:55.624781  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:55.624803  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:55.675403  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:55.675438  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:55.691492  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:55.691516  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:55.772283  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:55.772313  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:55.772330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:55.859440  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:55.859477  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:56.543221  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.041874  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:57.393723  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.894390  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:58.406009  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:58.422305  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:58.422382  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:58.468206  142411 cri.go:89] found id: ""
	I0420 01:29:58.468303  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.468321  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:58.468329  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:58.468402  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:58.513981  142411 cri.go:89] found id: ""
	I0420 01:29:58.514018  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.514027  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:58.514041  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:58.514105  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:58.559967  142411 cri.go:89] found id: ""
	I0420 01:29:58.560000  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.560011  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:58.560019  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:58.560084  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:58.600710  142411 cri.go:89] found id: ""
	I0420 01:29:58.600744  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.600763  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:58.600771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:58.600834  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:58.645995  142411 cri.go:89] found id: ""
	I0420 01:29:58.646022  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.646030  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:58.646036  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:58.646097  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:58.684930  142411 cri.go:89] found id: ""
	I0420 01:29:58.684957  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.684965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:58.684972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:58.685022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:58.727225  142411 cri.go:89] found id: ""
	I0420 01:29:58.727251  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.727259  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:58.727265  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:58.727319  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:58.765244  142411 cri.go:89] found id: ""
	I0420 01:29:58.765282  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.765293  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:58.765303  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:58.765330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:58.817791  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:58.817822  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:58.832882  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:58.832926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:58.919297  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:58.919325  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:58.919342  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:59.002590  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:59.002637  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.551854  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:01.568974  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:01.569054  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:01.609165  142411 cri.go:89] found id: ""
	I0420 01:30:01.609191  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.609200  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:01.609206  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:01.609272  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:01.653349  142411 cri.go:89] found id: ""
	I0420 01:30:01.653383  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.653396  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:01.653405  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:01.653482  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:01.698961  142411 cri.go:89] found id: ""
	I0420 01:30:01.698991  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.699002  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:01.699009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:01.699063  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:01.739230  142411 cri.go:89] found id: ""
	I0420 01:30:01.739271  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.739283  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:01.739292  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:01.739376  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:01.781839  142411 cri.go:89] found id: ""
	I0420 01:30:01.781873  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.781885  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:01.781893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:01.781960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:01.821212  142411 cri.go:89] found id: ""
	I0420 01:30:01.821241  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.821252  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:01.821259  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:01.821339  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:01.859959  142411 cri.go:89] found id: ""
	I0420 01:30:01.859984  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.859993  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:01.859999  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:01.860060  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:01.898832  142411 cri.go:89] found id: ""
	I0420 01:30:01.898858  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.898865  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:01.898875  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:01.898886  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.943065  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:01.943156  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:01.995618  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:01.995654  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:02.010489  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:02.010517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:02.090181  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:02.090222  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:02.090238  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:01.541135  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.041977  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:02.394456  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.894450  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.671376  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:04.687535  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:04.687629  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:04.728732  142411 cri.go:89] found id: ""
	I0420 01:30:04.728765  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.728778  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:04.728786  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:04.728854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:04.768537  142411 cri.go:89] found id: ""
	I0420 01:30:04.768583  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.768602  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:04.768610  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:04.768676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:04.811714  142411 cri.go:89] found id: ""
	I0420 01:30:04.811741  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.811750  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:04.811756  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:04.811816  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:04.852324  142411 cri.go:89] found id: ""
	I0420 01:30:04.852360  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.852371  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:04.852379  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:04.852452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:04.891657  142411 cri.go:89] found id: ""
	I0420 01:30:04.891688  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.891700  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:04.891708  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:04.891774  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:04.933192  142411 cri.go:89] found id: ""
	I0420 01:30:04.933222  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.933230  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:04.933236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:04.933291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:04.972796  142411 cri.go:89] found id: ""
	I0420 01:30:04.972819  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.972828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:04.972834  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:04.972888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:05.014782  142411 cri.go:89] found id: ""
	I0420 01:30:05.014821  142411 logs.go:276] 0 containers: []
	W0420 01:30:05.014833  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:05.014846  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:05.014862  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:05.067438  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:05.067470  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:05.121336  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:05.121371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:05.137495  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:05.137529  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:05.214132  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:05.214153  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:05.214170  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:07.796964  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:07.810856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:07.810917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:07.846993  142411 cri.go:89] found id: ""
	I0420 01:30:07.847024  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.847033  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:07.847040  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:07.847089  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:07.886422  142411 cri.go:89] found id: ""
	I0420 01:30:07.886452  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.886464  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:07.886474  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:07.886567  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:07.942200  142411 cri.go:89] found id: ""
	I0420 01:30:07.942230  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.942238  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:07.942245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:07.942296  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:07.980179  142411 cri.go:89] found id: ""
	I0420 01:30:07.980215  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.980226  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:07.980235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:07.980299  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:08.020097  142411 cri.go:89] found id: ""
	I0420 01:30:08.020130  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.020140  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:08.020145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:08.020215  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:08.063793  142411 cri.go:89] found id: ""
	I0420 01:30:08.063837  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.063848  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:08.063857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:08.063930  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:08.108674  142411 cri.go:89] found id: ""
	I0420 01:30:08.108705  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.108716  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:08.108724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:08.108798  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:08.147467  142411 cri.go:89] found id: ""
	I0420 01:30:08.147495  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.147503  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:08.147512  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:08.147525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:08.239416  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:08.239466  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:08.294639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:08.294669  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:08.349753  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:08.349795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:08.368971  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:08.369003  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:30:06.540958  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:08.541701  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:06.898857  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:09.397590  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:30:08.449996  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:10.950318  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:10.964969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:10.965032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:11.006321  142411 cri.go:89] found id: ""
	I0420 01:30:11.006354  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.006365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:11.006375  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:11.006437  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:11.047982  142411 cri.go:89] found id: ""
	I0420 01:30:11.048010  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.048019  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:11.048025  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:11.048073  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:11.089185  142411 cri.go:89] found id: ""
	I0420 01:30:11.089217  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.089226  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:11.089232  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:11.089287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:11.131293  142411 cri.go:89] found id: ""
	I0420 01:30:11.131322  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.131335  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:11.131344  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:11.131398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:11.170394  142411 cri.go:89] found id: ""
	I0420 01:30:11.170419  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.170427  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:11.170432  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:11.170485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:11.210580  142411 cri.go:89] found id: ""
	I0420 01:30:11.210619  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.210631  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:11.210640  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:11.210706  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:11.251938  142411 cri.go:89] found id: ""
	I0420 01:30:11.251977  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.251990  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:11.251998  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:11.252064  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:11.295999  142411 cri.go:89] found id: ""
	I0420 01:30:11.296033  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.296045  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:11.296057  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:11.296072  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:11.378564  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:11.378632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:11.422836  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:11.422868  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:11.475893  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:11.475928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:11.491524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:11.491555  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:11.569066  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
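	(The block above repeats for each "describe nodes" attempt: every crictl listing finds zero containers and kubectl is refused at localhost:8443 because the control plane is not running yet. A minimal standalone sketch of that reachability check, assuming only the address shown in the log; this is illustrative and not minikube's own code:

	// probe_apiserver.go - check whether the apiserver endpoint that
	// "describe nodes" needs is accepting TCP connections yet.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the "connection to the server localhost:8443 was refused" seen above.
			fmt.Println("apiserver not reachable yet:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}

	Until this probe succeeds, the log-gathering loop above keeps producing the same empty container listings and refused connections.)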
	I0420 01:30:11.041078  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:13.540339  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:15.541762  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:11.893724  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.394206  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.886464  142057 pod_ready.go:81] duration metric: took 4m0.00077804s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
	E0420 01:30:14.886500  142057 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:30:14.886528  142057 pod_ready.go:38] duration metric: took 4m14.554070758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
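	(The pod_ready lines above are a 4m0s wait for the metrics-server pod's Ready condition that ultimately times out. A hedged sketch of an equivalent wait, using the pod name and timeout from the log; kubectl's own wait subcommand is used here for illustration, it is not what minikube runs internally:

	// wait_pod_ready.go - block until a pod reports condition Ready, or give up after 4m0s.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "wait", "--namespace=kube-system",
			"--for=condition=Ready", "pod/metrics-server-569cc877fc-8s79l",
			"--timeout=4m0s").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("pod did not become Ready within 4m0s:", err)
		}
	}

	Because this wait expires, the run falls back to resetting the cluster, as the next lines show.)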
	I0420 01:30:14.886572  142057 kubeadm.go:591] duration metric: took 4m22.173690393s to restartPrimaryControlPlane
	W0420 01:30:14.886657  142057 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:14.886691  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:14.070158  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:14.086000  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:14.086067  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:14.128864  142411 cri.go:89] found id: ""
	I0420 01:30:14.128894  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.128906  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:14.128914  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:14.128986  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:14.169447  142411 cri.go:89] found id: ""
	I0420 01:30:14.169482  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.169497  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:14.169506  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:14.169583  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:14.210007  142411 cri.go:89] found id: ""
	I0420 01:30:14.210043  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.210054  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:14.210062  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:14.210119  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:14.247652  142411 cri.go:89] found id: ""
	I0420 01:30:14.247685  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.247695  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:14.247703  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:14.247764  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:14.290788  142411 cri.go:89] found id: ""
	I0420 01:30:14.290820  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.290830  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:14.290847  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:14.290908  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:14.351514  142411 cri.go:89] found id: ""
	I0420 01:30:14.351548  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.351570  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:14.351581  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:14.351637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:14.423481  142411 cri.go:89] found id: ""
	I0420 01:30:14.423520  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.423534  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:14.423543  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:14.423615  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:14.465597  142411 cri.go:89] found id: ""
	I0420 01:30:14.465622  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.465630  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:14.465639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:14.465655  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:14.522669  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:14.522705  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:14.541258  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:14.541293  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:14.618657  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:14.618678  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:14.618691  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:14.702616  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:14.702658  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:17.256212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:17.277171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:17.277250  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:17.321548  142411 cri.go:89] found id: ""
	I0420 01:30:17.321582  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.321600  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:17.321607  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:17.321676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:17.362856  142411 cri.go:89] found id: ""
	I0420 01:30:17.362883  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.362890  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:17.362896  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:17.362966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:17.409494  142411 cri.go:89] found id: ""
	I0420 01:30:17.409525  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.409539  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:17.409548  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:17.409631  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:17.447759  142411 cri.go:89] found id: ""
	I0420 01:30:17.447801  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.447812  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:17.447819  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:17.447885  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:17.498416  142411 cri.go:89] found id: ""
	I0420 01:30:17.498444  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.498454  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:17.498460  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:17.498528  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:17.546025  142411 cri.go:89] found id: ""
	I0420 01:30:17.546055  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.546064  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:17.546072  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:17.546138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:17.585797  142411 cri.go:89] found id: ""
	I0420 01:30:17.585829  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.585840  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:17.585848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:17.585919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:17.630850  142411 cri.go:89] found id: ""
	I0420 01:30:17.630886  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.630899  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:17.630911  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:17.630926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:17.689472  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:17.689510  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:17.705603  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:17.705642  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:17.794094  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:17.794137  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:17.794155  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:17.879397  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:17.879435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:18.041437  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.044174  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.428142  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:20.444936  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:20.445018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:20.487317  142411 cri.go:89] found id: ""
	I0420 01:30:20.487354  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.487365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:20.487373  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:20.487443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:20.537209  142411 cri.go:89] found id: ""
	I0420 01:30:20.537241  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.537254  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:20.537262  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:20.537348  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:20.584311  142411 cri.go:89] found id: ""
	I0420 01:30:20.584343  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.584352  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:20.584357  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:20.584413  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:20.631915  142411 cri.go:89] found id: ""
	I0420 01:30:20.631948  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.631959  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:20.631969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:20.632040  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:20.679680  142411 cri.go:89] found id: ""
	I0420 01:30:20.679707  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.679716  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:20.679721  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:20.679770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:20.724967  142411 cri.go:89] found id: ""
	I0420 01:30:20.725002  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.725013  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:20.725027  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:20.725091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:20.772717  142411 cri.go:89] found id: ""
	I0420 01:30:20.772751  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.772762  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:20.772771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:20.772837  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:20.812421  142411 cri.go:89] found id: ""
	I0420 01:30:20.812449  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.812460  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:20.812471  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:20.812485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:20.870522  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:20.870554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:20.886764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:20.886793  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:20.963941  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:20.963964  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:20.963979  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:21.045738  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:21.045778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:20.850989  141927 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.674674204s)
	I0420 01:30:20.851082  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:20.868537  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:20.880284  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:20.891650  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:20.891672  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:20.891726  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:30:20.902443  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:20.902509  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:20.913476  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:30:20.923762  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:20.923836  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:20.934281  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.944194  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:20.944254  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.955506  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:30:20.968039  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:20.968107  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
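	(The four grep/rm pairs above implement a stale-kubeconfig check: each file under /etc/kubernetes is kept only if it references the expected control-plane URL, otherwise it is removed before kubeadm init rewrites it. A small sketch of that pattern, assuming the URL and paths copied from the log; it is not minikube's implementation:

	// cleanup_stale_kubeconfigs.go - remove kubeconfigs that do not point at the expected endpoint.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		expected := "https://control-plane.minikube.internal:8444"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the URL is absent or the file is missing,
			// which is the "Process exited with status 2" reported in the log.
			if err := exec.Command("sudo", "grep", expected, f).Run(); err != nil {
				fmt.Printf("%s does not reference %s; removing it\n", f, expected)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	Here all four files are already gone after the kubeadm reset, so every grep fails and the rm calls are effectively no-ops.)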
	I0420 01:30:20.978918  141927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:21.214688  141927 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:22.539778  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:24.543547  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:23.600037  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:23.616539  142411 kubeadm.go:591] duration metric: took 4m4.142686832s to restartPrimaryControlPlane
	W0420 01:30:23.616641  142411 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:23.616676  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:25.481285  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.864573977s)
	I0420 01:30:25.481385  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:25.500950  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:25.518624  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:25.532506  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:25.532531  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:25.532584  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:25.546634  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:25.546708  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:25.561379  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:25.575506  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:25.575627  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:25.590615  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.604855  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:25.604923  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.619717  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:25.634525  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:25.634607  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:25.649408  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:25.735636  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:30:25.735697  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:25.913199  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:25.913347  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:25.913483  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:26.120240  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:26.122066  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:26.122169  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:26.122279  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:26.122395  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:26.122499  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:26.122623  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:26.122715  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:26.122806  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:26.122898  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:26.122999  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:26.123113  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:26.123173  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:26.123244  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:26.243908  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:26.354349  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:26.605778  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:26.833914  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:26.855348  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:26.857029  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:26.857250  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:27.010707  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:27.012314  142411 out.go:204]   - Booting up control plane ...
	I0420 01:30:27.012456  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:27.036284  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:27.049123  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:27.050561  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:27.053222  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:30:30.213456  141927 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:30.213557  141927 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:30.213687  141927 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:30.213826  141927 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:30.213915  141927 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:30.213978  141927 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:30.215501  141927 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:30.215594  141927 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:30.215667  141927 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:30.215802  141927 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:30.215886  141927 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:30.215960  141927 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:30.216018  141927 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:30.216097  141927 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:30.216156  141927 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:30.216258  141927 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:30.216350  141927 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:30.216385  141927 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:30.216447  141927 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:30.216517  141927 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:30.216589  141927 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:30.216653  141927 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:30.216743  141927 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:30.216832  141927 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:30.216933  141927 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:30.217019  141927 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:30.218228  141927 out.go:204]   - Booting up control plane ...
	I0420 01:30:30.218341  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:30.218446  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:30.218516  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:30.218615  141927 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:30.218703  141927 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:30.218753  141927 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:30.218904  141927 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:30.218975  141927 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:30.219027  141927 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001925972s
	I0420 01:30:30.219128  141927 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:30.219216  141927 kubeadm.go:309] [api-check] The API server is healthy after 5.502367015s
	I0420 01:30:30.219336  141927 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:30.219504  141927 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:30.219576  141927 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:30.219816  141927 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-907988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:30.219880  141927 kubeadm.go:309] [bootstrap-token] Using token: ozlrl4.y5r3psi4bnl35gso
	I0420 01:30:30.221283  141927 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:30.221416  141927 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:30.221533  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:30.221728  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:30.221968  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:30.222146  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:30.222255  141927 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:30.222385  141927 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:30.222455  141927 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:30.222524  141927 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:30.222534  141927 kubeadm.go:309] 
	I0420 01:30:30.222614  141927 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:30.222628  141927 kubeadm.go:309] 
	I0420 01:30:30.222692  141927 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:30.222699  141927 kubeadm.go:309] 
	I0420 01:30:30.222723  141927 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:30.222772  141927 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:30.222815  141927 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:30.222821  141927 kubeadm.go:309] 
	I0420 01:30:30.222878  141927 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:30.222885  141927 kubeadm.go:309] 
	I0420 01:30:30.222923  141927 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:30.222929  141927 kubeadm.go:309] 
	I0420 01:30:30.222994  141927 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:30.223100  141927 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:30.223171  141927 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:30.223189  141927 kubeadm.go:309] 
	I0420 01:30:30.223281  141927 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:30.223346  141927 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:30.223354  141927 kubeadm.go:309] 
	I0420 01:30:30.223423  141927 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223527  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:30.223552  141927 kubeadm.go:309] 	--control-plane 
	I0420 01:30:30.223559  141927 kubeadm.go:309] 
	I0420 01:30:30.223627  141927 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:30.223635  141927 kubeadm.go:309] 
	I0420 01:30:30.223704  141927 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223811  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:30.223826  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:30:30.223833  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:30.225184  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:27.041383  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:29.540967  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:30.226237  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:30.241388  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
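	(The scp above writes the bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist. The actual 496-byte payload is not reproduced in the log; the sketch below only shows what a typical bridge-plus-portmap conflist looks like, with an assumed pod subnet, and is not the file minikube ships:

	// write_bridge_conflist.go - write an illustrative bridge CNI conflist.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Contents are illustrative; the subnet 10.244.0.0/16 is an assumption.
		conflist := `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println("write failed:", err)
		}
	}

	Once a conflist is present in /etc/cni/net.d, the kubelet's CRI-O runtime can set up pod networking on this node.)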
	I0420 01:30:30.274356  141927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:30.274469  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:30.274503  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-907988 minikube.k8s.io/updated_at=2024_04_20T01_30_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=default-k8s-diff-port-907988 minikube.k8s.io/primary=true
	I0420 01:30:30.319402  141927 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:30.505362  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.006101  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.505679  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.005947  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.505747  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.005919  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.505449  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:34.006029  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.040710  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.541175  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.505846  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.006187  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.505618  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.005994  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.506217  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.006428  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.506359  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.006018  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.505454  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:39.006426  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.041157  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.542266  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.506227  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.005941  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.506123  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.006198  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.506244  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.006045  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.505458  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.006082  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.122481  141927 kubeadm.go:1107] duration metric: took 12.84807935s to wait for elevateKubeSystemPrivileges
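	(The run of "kubectl get sa default" commands above is a poll: the command fails until kube-controller-manager creates the default ServiceAccount, at which point kube-system privileges can be granted. A hedged sketch of that retry loop, reusing the kubectl path and kubeconfig from the log; the timeout below is an assumption, the log only shows ~12.8s of polling:

	// wait_default_sa.go - poll until the default ServiceAccount exists.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.30.0/kubectl"
		deadline := time.Now().Add(2 * time.Minute) // assumed upper bound
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}

	After the poll succeeds, the run proceeds to record the StartCluster duration and update the kubeconfig, as the following lines show.)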
	W0420 01:30:43.122525  141927 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:30:43.122535  141927 kubeadm.go:393] duration metric: took 5m11.83456536s to StartCluster
	I0420 01:30:43.122559  141927 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.122689  141927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:30:43.124746  141927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.125059  141927 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:30:43.126572  141927 out.go:177] * Verifying Kubernetes components...
	I0420 01:30:43.125129  141927 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:30:43.125301  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:30:43.128187  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:30:43.128231  141927 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128240  141927 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128277  141927 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-907988"
	I0420 01:30:43.128278  141927 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-907988"
	W0420 01:30:43.128288  141927 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:30:43.128302  141927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-907988"
	I0420 01:30:43.128352  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.128769  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128795  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128840  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128800  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128306  141927 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.128994  141927 addons.go:243] addon metrics-server should already be in state true
	I0420 01:30:43.129026  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.129378  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.129401  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.148251  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0420 01:30:43.148272  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0420 01:30:43.148503  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0420 01:30:43.148959  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.148985  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149060  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149605  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149626  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149683  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149688  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149698  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149706  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.150105  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150108  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150106  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150358  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.150703  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150733  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.150760  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150798  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.154242  141927 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.154266  141927 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:30:43.154300  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.154673  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.154715  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.167283  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0420 01:30:43.167925  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.168475  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.168496  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.168868  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.169094  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.171067  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0420 01:30:43.171384  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.173102  141927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:30:43.171760  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.172823  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0420 01:30:43.174639  141927 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.174661  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:30:43.174681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.174859  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.175307  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175331  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175460  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175476  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175799  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.175992  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.176361  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.176376  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.176686  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.178744  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.178848  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.180048  141927 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:30:43.179462  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.181257  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:30:43.181275  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:30:43.181289  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.181296  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.179641  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.182168  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.182437  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.182627  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.184562  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.184958  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.184985  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.185241  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.185430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.185621  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.185771  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.195778  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0420 01:30:43.196419  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.196979  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.197002  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.197763  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.198072  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.200177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.200480  141927 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.200497  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:30:43.200516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.204078  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.204128  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204154  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.204178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204275  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.204456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.204582  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.375731  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:30:43.424911  141927 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436729  141927 node_ready.go:49] node "default-k8s-diff-port-907988" has status "Ready":"True"
	I0420 01:30:43.436750  141927 node_ready.go:38] duration metric: took 11.810027ms for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436759  141927 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:43.445452  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:43.497224  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.526236  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.527573  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:30:43.527597  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:30:43.591844  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:30:43.591872  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:30:43.655692  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:43.655721  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:30:43.824523  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:44.808651  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.311370016s)
	I0420 01:30:44.808721  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808724  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.282444767s)
	I0420 01:30:44.808735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.808767  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808783  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809052  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809066  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809074  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809081  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809144  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809162  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809170  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809626  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809635  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809647  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809655  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:44.833935  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.833963  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.834326  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.834348  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316084  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.491512905s)
	I0420 01:30:45.316157  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316514  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316539  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316593  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316610  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316910  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316989  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.317007  141927 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-907988"
	I0420 01:30:45.316906  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:45.319289  141927 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:30:42.040865  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:44.042663  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.320468  141927 addons.go:505] duration metric: took 2.195343987s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:30:45.453717  141927 pod_ready.go:102] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.952010  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.952032  141927 pod_ready.go:81] duration metric: took 2.506556645s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.952040  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957512  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.957533  141927 pod_ready.go:81] duration metric: took 5.486362ms for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957541  141927 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962790  141927 pod_ready.go:92] pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.962810  141927 pod_ready.go:81] duration metric: took 5.261485ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962821  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968720  141927 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.968743  141927 pod_ready.go:81] duration metric: took 5.914425ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968754  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976930  141927 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.976946  141927 pod_ready.go:81] duration metric: took 8.183898ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976954  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350179  141927 pod_ready.go:92] pod "kube-proxy-jt8wr" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.350203  141927 pod_ready.go:81] duration metric: took 373.241134ms for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350212  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749542  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.749566  141927 pod_ready.go:81] duration metric: took 399.34726ms for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749573  141927 pod_ready.go:38] duration metric: took 3.312805349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:46.749587  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:30:46.749647  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:46.785318  141927 api_server.go:72] duration metric: took 3.660207577s to wait for apiserver process to appear ...
	I0420 01:30:46.785349  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:30:46.785373  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:30:46.793933  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:30:46.794890  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:30:46.794911  141927 api_server.go:131] duration metric: took 9.555146ms to wait for apiserver health ...
	I0420 01:30:46.794920  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:30:46.953036  141927 system_pods.go:59] 9 kube-system pods found
	I0420 01:30:46.953066  141927 system_pods.go:61] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:46.953070  141927 system_pods.go:61] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:46.953074  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:46.953077  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:46.953081  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:46.953085  141927 system_pods.go:61] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:46.953088  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:46.953094  141927 system_pods.go:61] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:46.953098  141927 system_pods.go:61] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:46.953108  141927 system_pods.go:74] duration metric: took 158.182751ms to wait for pod list to return data ...
	I0420 01:30:46.953116  141927 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:30:47.151205  141927 default_sa.go:45] found service account: "default"
	I0420 01:30:47.151245  141927 default_sa.go:55] duration metric: took 198.121475ms for default service account to be created ...
	I0420 01:30:47.151274  141927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:30:47.354321  141927 system_pods.go:86] 9 kube-system pods found
	I0420 01:30:47.354348  141927 system_pods.go:89] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:47.354353  141927 system_pods.go:89] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:47.354358  141927 system_pods.go:89] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:47.354364  141927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:47.354369  141927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:47.354373  141927 system_pods.go:89] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:47.354376  141927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:47.354383  141927 system_pods.go:89] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:47.354387  141927 system_pods.go:89] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:47.354395  141927 system_pods.go:126] duration metric: took 203.115923ms to wait for k8s-apps to be running ...
	I0420 01:30:47.354403  141927 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:30:47.354452  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.370946  141927 system_svc.go:56] duration metric: took 16.532953ms WaitForService to wait for kubelet
	I0420 01:30:47.370977  141927 kubeadm.go:576] duration metric: took 4.245884115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:30:47.370997  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:30:47.550097  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:30:47.550127  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:30:47.550138  141927 node_conditions.go:105] duration metric: took 179.136105ms to run NodePressure ...
	I0420 01:30:47.550150  141927 start.go:240] waiting for startup goroutines ...
	I0420 01:30:47.550156  141927 start.go:245] waiting for cluster config update ...
	I0420 01:30:47.550167  141927 start.go:254] writing updated cluster config ...
	I0420 01:30:47.550493  141927 ssh_runner.go:195] Run: rm -f paused
	I0420 01:30:47.614715  141927 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:30:47.616658  141927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-907988" cluster and "default" namespace by default
	I0420 01:30:47.623645  142057 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.736926697s)
	I0420 01:30:47.623716  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.648132  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:47.662521  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:47.674241  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:47.674265  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:47.674311  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:47.684981  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:47.685037  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:47.696549  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:47.706838  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:47.706885  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:47.717387  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.732194  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:47.732252  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.743425  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:47.756579  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:47.756629  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:47.769210  142057 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:47.832909  142057 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:47.832972  142057 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:47.987090  142057 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:47.987209  142057 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:47.987380  142057 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:48.253287  142057 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:48.255451  142057 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:48.255552  142057 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:48.255657  142057 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:48.255767  142057 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:48.255880  142057 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:48.255992  142057 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:48.256076  142057 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:48.256170  142057 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:48.256250  142057 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:48.256344  142057 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:48.256445  142057 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:48.256500  142057 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:48.256563  142057 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:48.346357  142057 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:48.602240  142057 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:48.741597  142057 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:49.086311  142057 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:49.284340  142057 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:49.284671  142057 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:49.287663  142057 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:46.540199  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:48.540848  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:50.541579  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:49.289305  142057 out.go:204]   - Booting up control plane ...
	I0420 01:30:49.289430  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:49.289558  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:49.289646  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:49.309520  142057 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:49.311328  142057 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:49.311389  142057 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:49.448766  142057 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:49.448889  142057 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:49.950225  142057 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.460713ms
	I0420 01:30:49.950316  142057 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:55.452587  142057 kubeadm.go:309] [api-check] The API server is healthy after 5.502061843s
	I0420 01:30:55.466768  142057 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:55.500892  142057 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:55.538376  142057 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:55.538631  142057 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-269507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:55.559344  142057 kubeadm.go:309] [bootstrap-token] Using token: jtn2hn.nnhc9vssv65463xy
	I0420 01:30:52.542748  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.560872  142057 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:55.561022  142057 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:55.575617  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:55.583307  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:55.586398  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:55.596138  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:55.599717  142057 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:55.861367  142057 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:56.310991  142057 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:56.860904  142057 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:56.860939  142057 kubeadm.go:309] 
	I0420 01:30:56.861051  142057 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:56.861077  142057 kubeadm.go:309] 
	I0420 01:30:56.861180  142057 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:56.861201  142057 kubeadm.go:309] 
	I0420 01:30:56.861232  142057 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:56.861345  142057 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:56.861438  142057 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:56.861454  142057 kubeadm.go:309] 
	I0420 01:30:56.861534  142057 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:56.861544  142057 kubeadm.go:309] 
	I0420 01:30:56.861628  142057 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:56.861644  142057 kubeadm.go:309] 
	I0420 01:30:56.861728  142057 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:56.861822  142057 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:56.861895  142057 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:56.861923  142057 kubeadm.go:309] 
	I0420 01:30:56.862120  142057 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:56.862228  142057 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:56.862246  142057 kubeadm.go:309] 
	I0420 01:30:56.862371  142057 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862532  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:56.862571  142057 kubeadm.go:309] 	--control-plane 
	I0420 01:30:56.862580  142057 kubeadm.go:309] 
	I0420 01:30:56.862700  142057 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:56.862724  142057 kubeadm.go:309] 
	I0420 01:30:56.862827  142057 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862955  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:56.863259  142057 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:56.863343  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:30:56.863358  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:56.865193  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:57.541555  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:00.040222  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:56.866515  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:56.880013  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:30:56.900677  142057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:56.900773  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:56.900809  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-269507 minikube.k8s.io/updated_at=2024_04_20T01_30_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=embed-certs-269507 minikube.k8s.io/primary=true
	I0420 01:30:56.942362  142057 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:57.124807  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:57.625201  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.125867  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.625845  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.124923  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.625004  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.125467  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.625081  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:01.125446  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.539751  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:04.540090  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:01.625279  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.125084  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.625048  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.125567  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.625428  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.125592  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.625874  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.125031  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.625698  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:06.125620  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.054009  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:31:07.054375  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:07.054708  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:06.625682  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.125909  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.625563  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.125451  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.625265  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.125677  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.625433  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.720318  142057 kubeadm.go:1107] duration metric: took 12.81961115s to wait for elevateKubeSystemPrivileges
	W0420 01:31:09.720362  142057 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:31:09.720373  142057 kubeadm.go:393] duration metric: took 5m17.067399347s to StartCluster
	I0420 01:31:09.720426  142057 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.720552  142057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:31:09.722646  142057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.722904  142057 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:31:09.724771  142057 out.go:177] * Verifying Kubernetes components...
	I0420 01:31:09.722979  142057 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:31:09.723175  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:31:09.724863  142057 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-269507"
	I0420 01:31:09.726208  142057 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-269507"
	W0420 01:31:09.726229  142057 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:31:09.724870  142057 addons.go:69] Setting default-storageclass=true in profile "embed-certs-269507"
	I0420 01:31:09.726270  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726289  142057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-269507"
	I0420 01:31:09.724889  142057 addons.go:69] Setting metrics-server=true in profile "embed-certs-269507"
	I0420 01:31:09.726351  142057 addons.go:234] Setting addon metrics-server=true in "embed-certs-269507"
	W0420 01:31:09.726365  142057 addons.go:243] addon metrics-server should already be in state true
	I0420 01:31:09.726395  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726159  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:31:09.726699  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726737  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726771  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726785  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726803  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726793  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.742932  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41221
	I0420 01:31:09.743143  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0420 01:31:09.743375  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743666  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743951  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.743968  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744102  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.744120  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744439  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.744497  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.745152  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745162  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745178  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745195  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745923  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0420 01:31:09.746441  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.747173  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.747202  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.747637  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.747934  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.751736  142057 addons.go:234] Setting addon default-storageclass=true in "embed-certs-269507"
	W0420 01:31:09.751760  142057 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:31:09.751791  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.752174  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.752199  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.763296  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I0420 01:31:09.763475  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0420 01:31:09.764103  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764119  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764635  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764656  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.764807  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764821  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.765353  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765369  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765562  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.766352  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.767675  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.769455  142057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:31:09.768866  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.770529  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:31:09.770596  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:31:09.770618  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.771959  142057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:31:07.039635  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.040381  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.772109  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0420 01:31:09.773531  142057 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:09.773545  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:31:09.773560  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.773989  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.774697  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.774711  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.774889  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775069  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.775522  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.775550  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.775770  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.775840  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.775855  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775973  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.776144  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.776283  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.776967  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777306  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.777376  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777621  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.777811  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.777949  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.778092  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.791609  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0420 01:31:09.792008  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.792475  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.792492  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.792811  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.793110  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.794743  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.795008  142057 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:09.795023  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:31:09.795037  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.797655  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798120  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.798144  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798394  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.798603  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.798745  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.798888  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.957088  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:31:10.012344  142057 node_ready.go:35] waiting up to 6m0s for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023887  142057 node_ready.go:49] node "embed-certs-269507" has status "Ready":"True"
	I0420 01:31:10.023917  142057 node_ready.go:38] duration metric: took 11.536403ms for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023929  142057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:10.035096  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:10.210022  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:10.222715  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:10.251807  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:31:10.251836  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:31:10.342638  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:31:10.342664  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:31:10.480676  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:10.480700  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:31:10.655186  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:11.331066  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121005107s)
	I0420 01:31:11.331125  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.108375538s)
	I0420 01:31:11.331139  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331152  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331165  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331181  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331530  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331601  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331611  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331641  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331664  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331681  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331684  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331692  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331699  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331646  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331932  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331959  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331979  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331991  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331989  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.332003  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.364269  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.364296  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.364641  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.364667  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.364671  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809229  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.154002194s)
	I0420 01:31:11.809282  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809618  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.809676  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809688  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809705  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809717  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809954  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809983  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.810001  142057 addons.go:470] Verifying addon metrics-server=true in "embed-certs-269507"
	I0420 01:31:11.810004  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.811610  142057 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:31:12.055506  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:12.055793  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:11.813049  142057 addons.go:505] duration metric: took 2.090078148s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:31:12.044618  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:12.565519  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.565543  142057 pod_ready.go:81] duration metric: took 2.530392572s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.565552  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.577986  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.578011  142057 pod_ready.go:81] duration metric: took 12.452506ms for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.578020  142057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595104  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.595129  142057 pod_ready.go:81] duration metric: took 17.103577ms for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595139  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602502  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.602524  142057 pod_ready.go:81] duration metric: took 7.377832ms for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602538  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608443  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.608462  142057 pod_ready.go:81] duration metric: took 5.916781ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608471  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939418  142057 pod_ready.go:92] pod "kube-proxy-4x66x" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.939444  142057 pod_ready.go:81] duration metric: took 330.966964ms for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939454  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341528  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:13.341556  142057 pod_ready.go:81] duration metric: took 402.093841ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341565  142057 pod_ready.go:38] duration metric: took 3.317622631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:13.341583  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:31:13.341648  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:31:13.361938  142057 api_server.go:72] duration metric: took 3.638999445s to wait for apiserver process to appear ...
	I0420 01:31:13.361967  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:31:13.361987  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:31:13.367149  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:31:13.368215  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:31:13.368243  142057 api_server.go:131] duration metric: took 6.268859ms to wait for apiserver health ...
	I0420 01:31:13.368254  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:31:13.545177  142057 system_pods.go:59] 9 kube-system pods found
	I0420 01:31:13.545203  142057 system_pods.go:61] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.545207  142057 system_pods.go:61] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.545212  142057 system_pods.go:61] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.545215  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.545219  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.545222  142057 system_pods.go:61] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.545224  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.545230  142057 system_pods.go:61] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.545233  142057 system_pods.go:61] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.545242  142057 system_pods.go:74] duration metric: took 176.980813ms to wait for pod list to return data ...
	I0420 01:31:13.545249  142057 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:31:13.739865  142057 default_sa.go:45] found service account: "default"
	I0420 01:31:13.739892  142057 default_sa.go:55] duration metric: took 194.636223ms for default service account to be created ...
	I0420 01:31:13.739903  142057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:31:13.942758  142057 system_pods.go:86] 9 kube-system pods found
	I0420 01:31:13.942785  142057 system_pods.go:89] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.942793  142057 system_pods.go:89] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.942801  142057 system_pods.go:89] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.942812  142057 system_pods.go:89] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.942819  142057 system_pods.go:89] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.942829  142057 system_pods.go:89] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.942835  142057 system_pods.go:89] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.942846  142057 system_pods.go:89] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.942854  142057 system_pods.go:89] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.942863  142057 system_pods.go:126] duration metric: took 202.954629ms to wait for k8s-apps to be running ...
	I0420 01:31:13.942873  142057 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:31:13.942926  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:31:13.962754  142057 system_svc.go:56] duration metric: took 19.872903ms WaitForService to wait for kubelet
	I0420 01:31:13.962781  142057 kubeadm.go:576] duration metric: took 4.239850872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:31:13.962802  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:31:14.139800  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:31:14.139834  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:31:14.139848  142057 node_conditions.go:105] duration metric: took 177.041675ms to run NodePressure ...
	I0420 01:31:14.139862  142057 start.go:240] waiting for startup goroutines ...
	I0420 01:31:14.139872  142057 start.go:245] waiting for cluster config update ...
	I0420 01:31:14.139886  142057 start.go:254] writing updated cluster config ...
	I0420 01:31:14.140201  142057 ssh_runner.go:195] Run: rm -f paused
	I0420 01:31:14.190985  142057 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:31:14.193207  142057 out.go:177] * Done! kubectl is now configured to use "embed-certs-269507" cluster and "default" namespace by default
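	(The node/pod readiness waits and the apiserver healthz probe logged above can be spot-checked by hand with stock kubectl; the commands below are only a manual approximation of what the log records, with the context name "embed-certs-269507" taken from the log and nothing else assumed.)
		# node should report Ready, as node_ready.go logs above
		kubectl --context embed-certs-269507 get nodes
		# the system-critical pods the log waits on
		kubectl --context embed-certs-269507 -n kube-system get pods
		# the same healthz endpoint probed at https://192.168.50.184:8443/healthz
		kubectl --context embed-certs-269507 get --raw /healthz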
	I0420 01:31:11.040724  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:13.043491  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:15.540182  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:17.540894  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:19.541858  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:22.056094  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:22.056315  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:22.039484  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:24.043137  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:26.043262  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:28.540379  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:30.540568  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:32.543371  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:35.040187  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:37.541354  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:40.039779  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:42.057024  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:42.057278  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:42.040147  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:44.540170  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:46.540576  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:48.543604  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:51.034230  141746 pod_ready.go:81] duration metric: took 4m0.001077028s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	E0420 01:31:51.034258  141746 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:31:51.034280  141746 pod_ready.go:38] duration metric: took 4m12.046687249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:51.034308  141746 kubeadm.go:591] duration metric: took 4m55.947094434s to restartPrimaryControlPlane
	W0420 01:31:51.034367  141746 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:31:51.034400  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:22.058965  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:32:22.059213  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:32:22.059231  142411 kubeadm.go:309] 
	I0420 01:32:22.059284  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:32:22.059341  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:32:22.059351  142411 kubeadm.go:309] 
	I0420 01:32:22.059398  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:32:22.059449  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:32:22.059581  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:32:22.059606  142411 kubeadm.go:309] 
	I0420 01:32:22.059693  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:32:22.059725  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:32:22.059796  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:32:22.059821  142411 kubeadm.go:309] 
	I0420 01:32:22.059916  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:32:22.060046  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:32:22.060068  142411 kubeadm.go:309] 
	I0420 01:32:22.060225  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:32:22.060371  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:32:22.060498  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:32:22.060624  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:32:22.060643  142411 kubeadm.go:309] 
	I0420 01:32:22.061155  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:22.061294  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:32:22.061403  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0420 01:32:22.061569  142411 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
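	(The kubeadm output above already names the troubleshooting steps; they are gathered here purely as a copy-paste convenience for a shell on the node, e.g. via "minikube ssh". CONTAINERID is the placeholder kubeadm itself uses and must be replaced with a real container ID.)
		# kubelet health, as suggested above
		systemctl status kubelet
		journalctl -xeu kubelet
		# list Kubernetes containers under cri-o, then inspect the failing one
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID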
	
	I0420 01:32:22.061628  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:23.211059  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.149398853s)
	I0420 01:32:23.211147  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.228140  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.240832  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.240868  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.240912  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.252674  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.252735  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.264128  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.274998  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.275059  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.286449  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.297377  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.297452  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.308971  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.320775  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.320842  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.333601  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.490252  141746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.455825605s)
	I0420 01:32:23.490330  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.515027  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:32:23.528835  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.542901  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.542927  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.542969  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.554931  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.555006  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.570665  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.583505  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.583576  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.595835  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.607468  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.607538  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.620629  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.634141  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.634222  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.648360  141746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.727697  141746 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:32:23.727825  141746 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:32:23.899280  141746 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:32:23.899376  141746 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:32:23.899456  141746 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:32:24.139299  141746 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:32:24.141410  141746 out.go:204]   - Generating certificates and keys ...
	I0420 01:32:24.141522  141746 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:32:24.141618  141746 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:32:24.141719  141746 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:32:24.141814  141746 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:32:24.141912  141746 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:32:24.141987  141746 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:32:24.142076  141746 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:32:24.142172  141746 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:32:24.142348  141746 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:32:24.142589  141746 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:32:24.142757  141746 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:32:24.142990  141746 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:32:24.247270  141746 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:32:24.326535  141746 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:32:24.538489  141746 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:32:24.594810  141746 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:32:24.712812  141746 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:32:24.713304  141746 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:32:24.719376  141746 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:32:24.721510  141746 out.go:204]   - Booting up control plane ...
	I0420 01:32:24.721649  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:32:24.721781  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:32:24.722470  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:32:24.748410  141746 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:32:24.750247  141746 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:32:24.750320  141746 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:32:24.906734  141746 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:32:24.906859  141746 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:32:25.409625  141746 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.844847ms
	I0420 01:32:25.409771  141746 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:32:23.603058  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:30.912062  141746 kubeadm.go:309] [api-check] The API server is healthy after 5.502434175s
	I0420 01:32:30.935231  141746 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:32:30.954860  141746 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:32:30.990255  141746 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:32:30.990480  141746 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-338118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:32:31.004218  141746 kubeadm.go:309] [bootstrap-token] Using token: 6ub3et.0wyu42zodual4kt8
	I0420 01:32:31.005771  141746 out.go:204]   - Configuring RBAC rules ...
	I0420 01:32:31.005875  141746 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:32:31.011978  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:32:31.020750  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:32:31.024958  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:32:31.032499  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:32:31.037128  141746 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:32:31.320324  141746 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:32:31.761773  141746 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:32:32.322540  141746 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:32:32.322563  141746 kubeadm.go:309] 
	I0420 01:32:32.322633  141746 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:32:32.322648  141746 kubeadm.go:309] 
	I0420 01:32:32.322728  141746 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:32:32.322737  141746 kubeadm.go:309] 
	I0420 01:32:32.322763  141746 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:32:32.322833  141746 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:32:32.322906  141746 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:32:32.322918  141746 kubeadm.go:309] 
	I0420 01:32:32.323005  141746 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:32:32.323015  141746 kubeadm.go:309] 
	I0420 01:32:32.323083  141746 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:32:32.323110  141746 kubeadm.go:309] 
	I0420 01:32:32.323184  141746 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:32:32.323304  141746 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:32:32.323362  141746 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:32:32.323372  141746 kubeadm.go:309] 
	I0420 01:32:32.323522  141746 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:32:32.323660  141746 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:32:32.323677  141746 kubeadm.go:309] 
	I0420 01:32:32.323765  141746 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.323916  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:32:32.323948  141746 kubeadm.go:309] 	--control-plane 
	I0420 01:32:32.323957  141746 kubeadm.go:309] 
	I0420 01:32:32.324035  141746 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:32:32.324049  141746 kubeadm.go:309] 
	I0420 01:32:32.324201  141746 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.324348  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:32:32.324967  141746 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
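	(When kubeadm init completes as above, the resulting control plane can be inspected directly on the node; a minimal sketch, assuming root access and the admin.conf path printed in the kubeadm output.)
		export KUBECONFIG=/etc/kubernetes/admin.conf
		kubectl get nodes
		kubectl -n kube-system get pods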
	I0420 01:32:32.325210  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:32:32.325228  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:32:32.327624  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:32:32.329029  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:32:32.344181  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:32:32.368978  141746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:32:32.369052  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:32.369086  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-338118 minikube.k8s.io/updated_at=2024_04_20T01_32_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=no-preload-338118 minikube.k8s.io/primary=true
	I0420 01:32:32.579160  141746 ops.go:34] apiserver oom_adj: -16
	I0420 01:32:32.579218  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.079458  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.579498  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.079957  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.579520  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.079902  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.579955  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.079525  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.579612  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.079831  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.579989  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.079481  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.579798  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.080239  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.579654  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.080267  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.579837  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.079840  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.579347  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.079368  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.579641  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.079257  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.579647  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.079317  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.580002  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.698993  141746 kubeadm.go:1107] duration metric: took 12.330007154s to wait for elevateKubeSystemPrivileges
	W0420 01:32:44.699036  141746 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:32:44.699045  141746 kubeadm.go:393] duration metric: took 5m49.674421659s to StartCluster
	I0420 01:32:44.699064  141746 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.699166  141746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:32:44.700731  141746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.700982  141746 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:32:44.702752  141746 out.go:177] * Verifying Kubernetes components...
	I0420 01:32:44.701040  141746 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:32:44.701201  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:32:44.704065  141746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-338118"
	I0420 01:32:44.704078  141746 addons.go:69] Setting metrics-server=true in profile "no-preload-338118"
	I0420 01:32:44.704077  141746 addons.go:69] Setting default-storageclass=true in profile "no-preload-338118"
	I0420 01:32:44.704099  141746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-338118"
	W0420 01:32:44.704105  141746 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:32:44.704114  141746 addons.go:234] Setting addon metrics-server=true in "no-preload-338118"
	I0420 01:32:44.704113  141746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-338118"
	W0420 01:32:44.704124  141746 addons.go:243] addon metrics-server should already be in state true
	I0420 01:32:44.704151  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704157  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704069  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:32:44.704452  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704485  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704503  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704521  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704535  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704545  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.720663  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0420 01:32:44.720685  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0420 01:32:44.721210  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721222  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721746  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721766  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.721901  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721925  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.722282  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722311  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722889  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.722914  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.723194  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0420 01:32:44.723775  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.724401  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.724427  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.724790  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.724975  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.728728  141746 addons.go:234] Setting addon default-storageclass=true in "no-preload-338118"
	W0420 01:32:44.728751  141746 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:32:44.728780  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.729136  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.729161  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.738505  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0420 01:32:44.738893  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.739388  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.739409  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.739916  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.740120  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.741929  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0420 01:32:44.742090  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.744131  141746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:32:44.742538  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.745561  141746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:44.745579  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:32:44.745597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.744662  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.745640  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.745994  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.746345  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.747491  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0420 01:32:44.747878  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.748594  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.748731  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.748752  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.750445  141746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:32:44.749050  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.749380  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.749990  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.752010  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:32:44.752029  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:32:44.752046  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.752131  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.752155  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.752307  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.752479  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.752647  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.752676  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.752676  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.754727  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755188  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.755216  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755497  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.755696  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.755866  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.756034  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.768442  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0420 01:32:44.768887  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.769453  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.769473  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.769852  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.770359  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.772155  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.772443  141746 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:44.772651  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:32:44.772686  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.775775  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776177  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.776205  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776313  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.776492  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.776667  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.776832  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.930301  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:32:44.948472  141746 node_ready.go:35] waiting up to 6m0s for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960637  141746 node_ready.go:49] node "no-preload-338118" has status "Ready":"True"
	I0420 01:32:44.960664  141746 node_ready.go:38] duration metric: took 12.15407ms for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960676  141746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:44.971143  141746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980894  141746 pod_ready.go:92] pod "etcd-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.980917  141746 pod_ready.go:81] duration metric: took 9.749994ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980929  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995192  141746 pod_ready.go:92] pod "kube-apiserver-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.995217  141746 pod_ready.go:81] duration metric: took 14.279681ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995229  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004302  141746 pod_ready.go:92] pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:45.004324  141746 pod_ready.go:81] duration metric: took 9.086713ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004338  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.062482  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:45.066314  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:32:45.066334  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:32:45.093830  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:45.148558  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:32:45.148600  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:32:45.235321  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:45.235349  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:32:45.275661  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:46.686292  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592425062s)
	I0420 01:32:46.686344  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.623774979s)
	I0420 01:32:46.686360  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686375  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686385  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686401  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686822  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.686897  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.686911  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686920  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686835  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.686839  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687001  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687013  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.687027  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686850  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.687153  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687166  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687359  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687373  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.697988  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.422274698s)
	I0420 01:32:46.698045  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698059  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698320  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698339  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698351  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698359  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698568  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.698658  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698676  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698687  141746 addons.go:470] Verifying addon metrics-server=true in "no-preload-338118"
	I0420 01:32:46.733170  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.733198  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.733551  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.733573  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.733605  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.735297  141746 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0420 01:32:46.736665  141746 addons.go:505] duration metric: took 2.035625149s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0420 01:32:47.011271  141746 pod_ready.go:92] pod "kube-proxy-f57d9" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.011299  141746 pod_ready.go:81] duration metric: took 2.006954798s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.011309  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025378  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.025408  141746 pod_ready.go:81] duration metric: took 14.090474ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025421  141746 pod_ready.go:38] duration metric: took 2.064731781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:47.025443  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:32:47.025511  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:32:47.052680  141746 api_server.go:72] duration metric: took 2.351656586s to wait for apiserver process to appear ...
	I0420 01:32:47.052712  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:32:47.052738  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:32:47.061908  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:32:47.065615  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:32:47.065641  141746 api_server.go:131] duration metric: took 12.920384ms to wait for apiserver health ...
	I0420 01:32:47.065651  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:32:47.158039  141746 system_pods.go:59] 9 kube-system pods found
	I0420 01:32:47.158076  141746 system_pods.go:61] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158087  141746 system_pods.go:61] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158096  141746 system_pods.go:61] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.158101  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.158107  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.158113  141746 system_pods.go:61] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.158117  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.158126  141746 system_pods.go:61] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.158134  141746 system_pods.go:61] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.158147  141746 system_pods.go:74] duration metric: took 92.489697ms to wait for pod list to return data ...
	I0420 01:32:47.158162  141746 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:32:47.351962  141746 default_sa.go:45] found service account: "default"
	I0420 01:32:47.352002  141746 default_sa.go:55] duration metric: took 193.830142ms for default service account to be created ...
	I0420 01:32:47.352016  141746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:32:47.557471  141746 system_pods.go:86] 9 kube-system pods found
	I0420 01:32:47.557511  141746 system_pods.go:89] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557524  141746 system_pods.go:89] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557534  141746 system_pods.go:89] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.557540  141746 system_pods.go:89] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.557547  141746 system_pods.go:89] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.557554  141746 system_pods.go:89] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.557564  141746 system_pods.go:89] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.557577  141746 system_pods.go:89] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.557589  141746 system_pods.go:89] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.557602  141746 system_pods.go:126] duration metric: took 205.577946ms to wait for k8s-apps to be running ...
	I0420 01:32:47.557615  141746 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:32:47.557674  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:47.577745  141746 system_svc.go:56] duration metric: took 20.111982ms WaitForService to wait for kubelet
	I0420 01:32:47.577774  141746 kubeadm.go:576] duration metric: took 2.876759476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:32:47.577794  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:32:47.753216  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:32:47.753246  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:32:47.753257  141746 node_conditions.go:105] duration metric: took 175.457668ms to run NodePressure ...
	I0420 01:32:47.753269  141746 start.go:240] waiting for startup goroutines ...
	I0420 01:32:47.753275  141746 start.go:245] waiting for cluster config update ...
	I0420 01:32:47.753286  141746 start.go:254] writing updated cluster config ...
	I0420 01:32:47.753612  141746 ssh_runner.go:195] Run: rm -f paused
	I0420 01:32:47.804681  141746 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:32:47.806823  141746 out.go:177] * Done! kubectl is now configured to use "no-preload-338118" cluster and "default" namespace by default
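
	[editor's note] The run above enables storage-provisioner, metrics-server, and default-storageclass and then waits for the kube-system pods before reporting "Done!". As a minimal, illustrative sketch (not part of the test harness output), the same state could be spot-checked by hand against the profile named in the log; the context name comes from the log, the commands themselves are only an assumption about how one might verify it:

		# Illustrative only -- not emitted by the test above.
		# List the kube-system pods (metrics-server and storage-provisioner appear in the log output).
		kubectl --context no-preload-338118 -n kube-system get pods
		# Confirm the default StorageClass installed by the default-storageclass addon.
		kubectl --context no-preload-338118 get storageclass
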
	I0420 01:34:20.028550  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:34:20.028769  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0420 01:34:20.030749  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:34:20.030826  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:34:20.030947  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:34:20.031078  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:34:20.031217  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:34:20.031319  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:34:20.032927  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:34:20.033024  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:34:20.033110  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:34:20.033211  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:34:20.033286  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:34:20.033410  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:34:20.033496  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:34:20.033597  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:34:20.033695  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:34:20.033805  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:34:20.033921  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:34:20.033972  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:34:20.034042  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:34:20.034125  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:34:20.034200  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:34:20.034287  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:34:20.034355  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:34:20.034510  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:34:20.034614  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:34:20.034680  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:34:20.034760  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:34:20.036300  142411 out.go:204]   - Booting up control plane ...
	I0420 01:34:20.036380  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:34:20.036479  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:34:20.036583  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:34:20.036705  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:34:20.036888  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:34:20.036955  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:34:20.037046  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037228  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037291  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037494  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037576  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037730  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037789  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037977  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038044  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.038262  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038284  142411 kubeadm.go:309] 
	I0420 01:34:20.038341  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:34:20.038382  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:34:20.038396  142411 kubeadm.go:309] 
	I0420 01:34:20.038443  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:34:20.038476  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:34:20.038612  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:34:20.038625  142411 kubeadm.go:309] 
	I0420 01:34:20.038735  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:34:20.038767  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:34:20.038794  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:34:20.038808  142411 kubeadm.go:309] 
	I0420 01:34:20.038902  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:34:20.038977  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:34:20.038987  142411 kubeadm.go:309] 
	I0420 01:34:20.039101  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:34:20.039203  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:34:20.039274  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:34:20.039342  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:34:20.039384  142411 kubeadm.go:309] 
	I0420 01:34:20.039417  142411 kubeadm.go:393] duration metric: took 8m0.622979268s to StartCluster
	I0420 01:34:20.039459  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:34:20.039514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:34:20.090236  142411 cri.go:89] found id: ""
	I0420 01:34:20.090262  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.090270  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:34:20.090276  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:34:20.090331  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:34:20.133841  142411 cri.go:89] found id: ""
	I0420 01:34:20.133867  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.133875  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:34:20.133883  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:34:20.133955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:34:20.176186  142411 cri.go:89] found id: ""
	I0420 01:34:20.176219  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.176230  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:34:20.176235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:34:20.176295  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:34:20.214895  142411 cri.go:89] found id: ""
	I0420 01:34:20.214932  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.214944  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:34:20.214951  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:34:20.215018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:34:20.257759  142411 cri.go:89] found id: ""
	I0420 01:34:20.257786  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.257795  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:34:20.257800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:34:20.257857  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:34:20.298111  142411 cri.go:89] found id: ""
	I0420 01:34:20.298153  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.298164  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:34:20.298172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:34:20.298226  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:34:20.333435  142411 cri.go:89] found id: ""
	I0420 01:34:20.333469  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.333481  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:34:20.333489  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:34:20.333554  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:34:20.370848  142411 cri.go:89] found id: ""
	I0420 01:34:20.370872  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.370880  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:34:20.370890  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:34:20.370902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:34:20.425495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:34:20.425536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:34:20.442039  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:34:20.442066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:34:20.523456  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:34:20.523483  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:34:20.523504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:34:20.633387  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:34:20.633427  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0420 01:34:20.688731  142411 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0420 01:34:20.688783  142411 out.go:239] * 
	W0420 01:34:20.688839  142411 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.688862  142411 out.go:239] * 
	W0420 01:34:20.689758  142411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:34:20.693376  142411 out.go:177] 
	W0420 01:34:20.694909  142411 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.694971  142411 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0420 01:34:20.695003  142411 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0420 01:34:20.696409  142411 out.go:177] 
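
	[editor's note] The failure above is the kubeadm "wait-control-plane" timeout for the v1.20.0 start attempt in this run; the error text itself points at the kubelet and the container runtime. A condensed sketch of those suggested checks, run inside the affected node (for example via `minikube ssh -p <profile>`), using only the commands the log already recommends:

		# Condensed from the troubleshooting hints printed above; run on the node, not the host.
		systemctl status kubelet
		journalctl -xeu kubelet | tail -n 100
		# List Kubernetes containers under CRI-O, then inspect the failing one:
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		# The log also suggests retrying the start with:
		#   --extra-config=kubelet.cgroup-driver=systemd
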
	
	
	==> CRI-O <==
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.250032587Z" level=debug msg="Can't find fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:97" id=142a71b0-5cc1-4b1d-b7d2-4daea042a70b name=/runtime.v1.ImageService/ImageStatus
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.250063753Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:111" id=142a71b0-5cc1-4b1d-b7d2-4daea042a70b name=/runtime.v1.ImageService/ImageStatus
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.250086999Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:33" id=142a71b0-5cc1-4b1d-b7d2-4daea042a70b name=/runtime.v1.ImageService/ImageStatus
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.250122154Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=142a71b0-5cc1-4b1d-b7d2-4daea042a70b name=/runtime.v1.ImageService/ImageStatus
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.299975477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=466ec564-a19c-41a3-9dcd-2aa2db6fa4a8 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.300073538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=466ec564-a19c-41a3-9dcd-2aa2db6fa4a8 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.301829121Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc1769ce-e8be-427b-9922-88ec6878c715 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.302281018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577216302258339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc1769ce-e8be-427b-9922-88ec6878c715 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.303080383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68636579-33a6-4348-b553-396b9af9e455 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.303158102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68636579-33a6-4348-b553-396b9af9e455 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.303361007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f2d91a77303a1d1f78754f56e8285673fa3c82912968524268bfb82b6862551,PodSandboxId:1b2078fead88b76fc01ca7f4f074f851b9b2853cf803174f93ac997d33777513,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576671860326870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eee97ab-bb31-4a3d-be80-845b6545e897,},Annotations:map[string]string{io.kubernetes.container.hash: 85082f9b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f2094ebcfb7e4ca383a963f4d25356b7e36c999dd36a68860bdfa92b86c086,PodSandboxId:355e142949e4aeedc1349c185fbc3654ab3b0d991137f3acc7d4244c1d1a6207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671126164065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mpf5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 331105fe-dd08-409f-9b2d-658b958cd1a2,},Annotations:map[string]string{io.kubernetes.container.hash: 36ae1744,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461315561eb163aacbbaed2f1f488aa988fbe30040ac563863c1971ec2dfa4db,PodSandboxId:a7ab732b7acbf6ac6a83270716540afb22bbf82735a7e9ee6f008b0bd7fce058,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671069787308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ltzhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
ca2da30-b908-46fc-a028-d43a17c6307e,},Annotations:map[string]string{io.kubernetes.container.hash: 26cea6eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72866d395a259b41dd1568062ae9f525efd098c649089030d7dbe358475b416,PodSandboxId:7a109603aadde1c036974353a80ab912dc9b05b75d5d168569946085f8061861,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713576670070973648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4x66x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da8306-56f8-49bf-a2e7-cf5d4877dc16,},Annotations:map[string]string{io.kubernetes.container.hash: ff495a6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e929f7269b44770eeed9b70db7bf4f6c1f9b43e4d3b6575e87fa13f4bf4a84e,PodSandboxId:faef3224e215842fb808283749bdc3849cd4418d90ea5322b30b53e16c3a9b78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576650473340132
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdd3b00bf785377bddc8fab82d6d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d359f3a3d53d4edd8d3cf64481a586b3ab86d0a85e8ba6801990806ced8348,PodSandboxId:f984207e0813470103daec5dcbd25b7c89c868b8cbb0f729335c5f96f477a78c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576650491322267,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0587f14b2460eaf30de6c91e37166938,},Annotations:map[string]string{io.kubernetes.container.hash: 674a4080,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b85cfa748850504f7c20bbab2dc0000a90942d5a67a20269950485735cb292,PodSandboxId:d84eb958169e26f38279504c69921ef93b6c4df49a25416b8857176ca186a813,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576650457196099,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ca2077f4fbb1e69605853c38ebffe8,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c404e808b8cc8e4567f11f28c04624da4cf3a2f178f7e2146de9374c146072,PodSandboxId:93b7f48a0a1a553f1a77d60546e223d09ffdabedc551991be913d51d8e94b7f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576650449185388,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b975a1148b62f00792b68b5fc13bb267,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68636579-33a6-4348-b553-396b9af9e455 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.355060825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5d5a0e7-c62c-4030-aa86-6ab4f994c466 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.355205540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5d5a0e7-c62c-4030-aa86-6ab4f994c466 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.356457233Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77437920-50a5-446f-aead-f034b52a82d4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.357239693Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577216357209469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77437920-50a5-446f-aead-f034b52a82d4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.358552831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d845201-215f-4d31-9cc1-75e4bf164efa name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.358649691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d845201-215f-4d31-9cc1-75e4bf164efa name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.358903501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f2d91a77303a1d1f78754f56e8285673fa3c82912968524268bfb82b6862551,PodSandboxId:1b2078fead88b76fc01ca7f4f074f851b9b2853cf803174f93ac997d33777513,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576671860326870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eee97ab-bb31-4a3d-be80-845b6545e897,},Annotations:map[string]string{io.kubernetes.container.hash: 85082f9b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f2094ebcfb7e4ca383a963f4d25356b7e36c999dd36a68860bdfa92b86c086,PodSandboxId:355e142949e4aeedc1349c185fbc3654ab3b0d991137f3acc7d4244c1d1a6207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671126164065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mpf5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 331105fe-dd08-409f-9b2d-658b958cd1a2,},Annotations:map[string]string{io.kubernetes.container.hash: 36ae1744,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461315561eb163aacbbaed2f1f488aa988fbe30040ac563863c1971ec2dfa4db,PodSandboxId:a7ab732b7acbf6ac6a83270716540afb22bbf82735a7e9ee6f008b0bd7fce058,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671069787308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ltzhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
ca2da30-b908-46fc-a028-d43a17c6307e,},Annotations:map[string]string{io.kubernetes.container.hash: 26cea6eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72866d395a259b41dd1568062ae9f525efd098c649089030d7dbe358475b416,PodSandboxId:7a109603aadde1c036974353a80ab912dc9b05b75d5d168569946085f8061861,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713576670070973648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4x66x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da8306-56f8-49bf-a2e7-cf5d4877dc16,},Annotations:map[string]string{io.kubernetes.container.hash: ff495a6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e929f7269b44770eeed9b70db7bf4f6c1f9b43e4d3b6575e87fa13f4bf4a84e,PodSandboxId:faef3224e215842fb808283749bdc3849cd4418d90ea5322b30b53e16c3a9b78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576650473340132
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdd3b00bf785377bddc8fab82d6d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d359f3a3d53d4edd8d3cf64481a586b3ab86d0a85e8ba6801990806ced8348,PodSandboxId:f984207e0813470103daec5dcbd25b7c89c868b8cbb0f729335c5f96f477a78c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576650491322267,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0587f14b2460eaf30de6c91e37166938,},Annotations:map[string]string{io.kubernetes.container.hash: 674a4080,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b85cfa748850504f7c20bbab2dc0000a90942d5a67a20269950485735cb292,PodSandboxId:d84eb958169e26f38279504c69921ef93b6c4df49a25416b8857176ca186a813,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576650457196099,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ca2077f4fbb1e69605853c38ebffe8,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c404e808b8cc8e4567f11f28c04624da4cf3a2f178f7e2146de9374c146072,PodSandboxId:93b7f48a0a1a553f1a77d60546e223d09ffdabedc551991be913d51d8e94b7f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576650449185388,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b975a1148b62f00792b68b5fc13bb267,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d845201-215f-4d31-9cc1-75e4bf164efa name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.402191967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d9826d6-f629-4597-bb1f-436b73a4e364 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.402324707Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d9826d6-f629-4597-bb1f-436b73a4e364 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.404318185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0793ec2a-dfaf-4a7f-acf7-d185df697dd5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.405314099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577216405284935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0793ec2a-dfaf-4a7f-acf7-d185df697dd5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.406434730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25b6d79a-7048-4ed8-9a0c-f150a21b3bb3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.406593611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25b6d79a-7048-4ed8-9a0c-f150a21b3bb3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:40:16 embed-certs-269507 crio[729]: time="2024-04-20 01:40:16.406829070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f2d91a77303a1d1f78754f56e8285673fa3c82912968524268bfb82b6862551,PodSandboxId:1b2078fead88b76fc01ca7f4f074f851b9b2853cf803174f93ac997d33777513,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576671860326870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eee97ab-bb31-4a3d-be80-845b6545e897,},Annotations:map[string]string{io.kubernetes.container.hash: 85082f9b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f2094ebcfb7e4ca383a963f4d25356b7e36c999dd36a68860bdfa92b86c086,PodSandboxId:355e142949e4aeedc1349c185fbc3654ab3b0d991137f3acc7d4244c1d1a6207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671126164065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mpf5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 331105fe-dd08-409f-9b2d-658b958cd1a2,},Annotations:map[string]string{io.kubernetes.container.hash: 36ae1744,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461315561eb163aacbbaed2f1f488aa988fbe30040ac563863c1971ec2dfa4db,PodSandboxId:a7ab732b7acbf6ac6a83270716540afb22bbf82735a7e9ee6f008b0bd7fce058,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671069787308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ltzhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
ca2da30-b908-46fc-a028-d43a17c6307e,},Annotations:map[string]string{io.kubernetes.container.hash: 26cea6eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72866d395a259b41dd1568062ae9f525efd098c649089030d7dbe358475b416,PodSandboxId:7a109603aadde1c036974353a80ab912dc9b05b75d5d168569946085f8061861,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713576670070973648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4x66x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da8306-56f8-49bf-a2e7-cf5d4877dc16,},Annotations:map[string]string{io.kubernetes.container.hash: ff495a6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e929f7269b44770eeed9b70db7bf4f6c1f9b43e4d3b6575e87fa13f4bf4a84e,PodSandboxId:faef3224e215842fb808283749bdc3849cd4418d90ea5322b30b53e16c3a9b78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576650473340132
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdd3b00bf785377bddc8fab82d6d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d359f3a3d53d4edd8d3cf64481a586b3ab86d0a85e8ba6801990806ced8348,PodSandboxId:f984207e0813470103daec5dcbd25b7c89c868b8cbb0f729335c5f96f477a78c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576650491322267,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0587f14b2460eaf30de6c91e37166938,},Annotations:map[string]string{io.kubernetes.container.hash: 674a4080,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b85cfa748850504f7c20bbab2dc0000a90942d5a67a20269950485735cb292,PodSandboxId:d84eb958169e26f38279504c69921ef93b6c4df49a25416b8857176ca186a813,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576650457196099,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ca2077f4fbb1e69605853c38ebffe8,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c404e808b8cc8e4567f11f28c04624da4cf3a2f178f7e2146de9374c146072,PodSandboxId:93b7f48a0a1a553f1a77d60546e223d09ffdabedc551991be913d51d8e94b7f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576650449185388,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b975a1148b62f00792b68b5fc13bb267,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25b6d79a-7048-4ed8-9a0c-f150a21b3bb3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1f2d91a77303a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   1b2078fead88b       storage-provisioner
	58f2094ebcfb7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   355e142949e4a       coredns-7db6d8ff4d-mpf5l
	461315561eb16       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a7ab732b7acbf       coredns-7db6d8ff4d-ltzhp
	a72866d395a25       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   7a109603aadde       kube-proxy-4x66x
	c6d359f3a3d53       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   f984207e08134       etcd-embed-certs-269507
	3e929f7269b44       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   faef3224e2158       kube-controller-manager-embed-certs-269507
	d9b85cfa74885       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   d84eb958169e2       kube-apiserver-embed-certs-269507
	a8c404e808b8c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   93b7f48a0a1a5       kube-scheduler-embed-certs-269507
	
	
	==> coredns [461315561eb163aacbbaed2f1f488aa988fbe30040ac563863c1971ec2dfa4db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [58f2094ebcfb7e4ca383a963f4d25356b7e36c999dd36a68860bdfa92b86c086] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-269507
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-269507
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=embed-certs-269507
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T01_30_56_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:30:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-269507
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:40:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:36:22 +0000   Sat, 20 Apr 2024 01:30:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:36:22 +0000   Sat, 20 Apr 2024 01:30:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:36:22 +0000   Sat, 20 Apr 2024 01:30:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:36:22 +0000   Sat, 20 Apr 2024 01:30:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.184
	  Hostname:    embed-certs-269507
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 70baa34e90ac40738e978058d7b85f6a
	  System UUID:                70baa34e-90ac-4073-8e97-8058d7b85f6a
	  Boot ID:                    3953aa30-c7ca-4505-9da5-da799418c0c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-ltzhp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-mpf5l                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-269507                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-269507             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-269507    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-4x66x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-269507             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-jwbst               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s  kubelet          Node embed-certs-269507 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node embed-certs-269507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node embed-certs-269507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node embed-certs-269507 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-269507 event: Registered Node embed-certs-269507 in Controller
	
	
	==> dmesg <==
	[  +0.052987] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050652] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.831860] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.516649] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.566600] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.299096] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.067945] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.088762] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.261597] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.145621] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.338116] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +5.264487] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.064832] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.364859] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +4.644663] kauditd_printk_skb: 97 callbacks suppressed
	[Apr20 01:26] kauditd_printk_skb: 84 callbacks suppressed
	[Apr20 01:30] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.036376] systemd-fstab-generator[3618]: Ignoring "noauto" option for root device
	[  +4.495256] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.087822] systemd-fstab-generator[3943]: Ignoring "noauto" option for root device
	[Apr20 01:31] systemd-fstab-generator[4141]: Ignoring "noauto" option for root device
	[  +0.123323] kauditd_printk_skb: 14 callbacks suppressed
	[Apr20 01:32] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [c6d359f3a3d53d4edd8d3cf64481a586b3ab86d0a85e8ba6801990806ced8348] <==
	{"level":"info","ts":"2024-04-20T01:30:51.006693Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dfaeaf2ad25a061e","local-member-id":"bf2ced3b97aa693f","added-peer-id":"bf2ced3b97aa693f","added-peer-peer-urls":["https://192.168.50.184:2380"]}
	{"level":"info","ts":"2024-04-20T01:30:51.037395Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T01:30:51.037704Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"bf2ced3b97aa693f","initial-advertise-peer-urls":["https://192.168.50.184:2380"],"listen-peer-urls":["https://192.168.50.184:2380"],"advertise-client-urls":["https://192.168.50.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:30:51.037761Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T01:30:51.037852Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.184:2380"}
	{"level":"info","ts":"2024-04-20T01:30:51.037876Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.184:2380"}
	{"level":"info","ts":"2024-04-20T01:30:51.860066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-20T01:30:51.860126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-20T01:30:51.860161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f received MsgPreVoteResp from bf2ced3b97aa693f at term 1"}
	{"level":"info","ts":"2024-04-20T01:30:51.860175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became candidate at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:51.86018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f received MsgVoteResp from bf2ced3b97aa693f at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:51.860191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became leader at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:51.860199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bf2ced3b97aa693f elected leader bf2ced3b97aa693f at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:51.864742Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"bf2ced3b97aa693f","local-member-attributes":"{Name:embed-certs-269507 ClientURLs:[https://192.168.50.184:2379]}","request-path":"/0/members/bf2ced3b97aa693f/attributes","cluster-id":"dfaeaf2ad25a061e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:30:51.865013Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:30:51.865683Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:51.867672Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:30:51.872081Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.184:2379"}
	{"level":"info","ts":"2024-04-20T01:30:51.872208Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dfaeaf2ad25a061e","local-member-id":"bf2ced3b97aa693f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:51.872319Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:51.872358Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:51.875569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:30:51.875619Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:30:51.877538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2024/04/20 01:30:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 01:40:16 up 14 min,  0 users,  load average: 0.01, 0.08, 0.08
	Linux embed-certs-269507 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d9b85cfa748850504f7c20bbab2dc0000a90942d5a67a20269950485735cb292] <==
	I0420 01:34:12.586806       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:35:53.362032       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:35:53.362292       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0420 01:35:54.363587       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:35:54.363690       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:35:54.363705       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:35:54.363782       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:35:54.363842       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:35:54.364978       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:36:54.363956       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:36:54.364025       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:36:54.364034       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:36:54.365226       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:36:54.365320       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:36:54.365331       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:38:54.365068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:38:54.365369       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:38:54.365400       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:38:54.365686       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:38:54.365795       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:38:54.366592       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
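	The repeated 503s above mean the aggregated v1beta1.metrics.k8s.io APIService never became available. A couple of routine kubectl checks can show whether the metrics-server pod itself is the culprit; this is only a sketch, and the context name and pod name are taken from this profile's logs and the node description above, so they will differ between runs.
		# check whether the aggregated APIService reports Available=True
		kubectl --context embed-certs-269507 get apiservice v1beta1.metrics.k8s.io
		# inspect the metrics-server pod's own logs for startup or TLS errors
		kubectl --context embed-certs-269507 -n kube-system logs metrics-server-569cc877fc-jwbst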
	
	
	==> kube-controller-manager [3e929f7269b44770eeed9b70db7bf4f6c1f9b43e4d3b6575e87fa13f4bf4a84e] <==
	I0420 01:34:39.342180       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:35:08.877396       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:35:09.350600       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:35:38.885023       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:35:39.361892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:36:08.893253       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:36:09.372239       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:36:38.899110       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:36:39.381060       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0420 01:37:02.267933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="316.192µs"
	E0420 01:37:08.906814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:37:09.393819       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0420 01:37:17.263805       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="143.77µs"
	E0420 01:37:38.912175       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:37:39.405185       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:38:08.920310       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:38:09.414094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:38:38.925548       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:38:39.424640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:39:08.931035       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:39:09.433594       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:39:38.936316       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:39:39.441679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:40:08.942038       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:40:09.452252       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a72866d395a259b41dd1568062ae9f525efd098c649089030d7dbe358475b416] <==
	I0420 01:31:10.446907       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:31:10.491661       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.184"]
	I0420 01:31:10.606687       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:31:10.606749       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:31:10.606764       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:31:10.610661       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:31:10.610851       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:31:10.610867       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:31:10.613085       1 config.go:192] "Starting service config controller"
	I0420 01:31:10.613100       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:31:10.613207       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:31:10.613214       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:31:10.614116       1 config.go:319] "Starting node config controller"
	I0420 01:31:10.614125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:31:10.713222       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:31:10.713274       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:31:10.714440       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a8c404e808b8cc8e4567f11f28c04624da4cf3a2f178f7e2146de9374c146072] <==
	E0420 01:30:53.387295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 01:30:53.387537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:54.210018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:54.210204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:54.249389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:54.249591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:54.393930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 01:30:54.394133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 01:30:54.445254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 01:30:54.445311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 01:30:54.498081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 01:30:54.499130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 01:30:54.527567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 01:30:54.527624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 01:30:54.553415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 01:30:54.553628       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 01:30:54.555166       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:54.555292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:54.592672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:30:54.592733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 01:30:54.679211       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 01:30:54.679241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 01:30:54.692201       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:30:54.692325       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0420 01:30:56.973896       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 01:37:56 embed-certs-269507 kubelet[3950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:37:56 embed-certs-269507 kubelet[3950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:37:56 embed-certs-269507 kubelet[3950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:37:56 embed-certs-269507 kubelet[3950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:38:09 embed-certs-269507 kubelet[3950]: E0420 01:38:09.248017    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:38:22 embed-certs-269507 kubelet[3950]: E0420 01:38:22.248612    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:38:36 embed-certs-269507 kubelet[3950]: E0420 01:38:36.249173    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:38:48 embed-certs-269507 kubelet[3950]: E0420 01:38:48.247904    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:38:56 embed-certs-269507 kubelet[3950]: E0420 01:38:56.283247    3950 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:38:56 embed-certs-269507 kubelet[3950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:38:56 embed-certs-269507 kubelet[3950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:38:56 embed-certs-269507 kubelet[3950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:38:56 embed-certs-269507 kubelet[3950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:39:00 embed-certs-269507 kubelet[3950]: E0420 01:39:00.249139    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:39:11 embed-certs-269507 kubelet[3950]: E0420 01:39:11.248180    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:39:23 embed-certs-269507 kubelet[3950]: E0420 01:39:23.247865    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:39:35 embed-certs-269507 kubelet[3950]: E0420 01:39:35.248043    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:39:48 embed-certs-269507 kubelet[3950]: E0420 01:39:48.250118    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:39:56 embed-certs-269507 kubelet[3950]: E0420 01:39:56.281761    3950 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:39:56 embed-certs-269507 kubelet[3950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:39:56 embed-certs-269507 kubelet[3950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:39:56 embed-certs-269507 kubelet[3950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:39:56 embed-certs-269507 kubelet[3950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:40:03 embed-certs-269507 kubelet[3950]: E0420 01:40:03.247705    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:40:16 embed-certs-269507 kubelet[3950]: E0420 01:40:16.250885    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	
	
	==> storage-provisioner [1f2d91a77303a1d1f78754f56e8285673fa3c82912968524268bfb82b6862551] <==
	I0420 01:31:11.949711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 01:31:11.971242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 01:31:11.971567       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 01:31:11.980867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 01:31:11.981322       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-269507_3dd4b7cc-e3a1-4847-8c7c-340330c2d74c!
	I0420 01:31:11.982313       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"391cfcae-d01f-4568-b68e-09952097c20d", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-269507_3dd4b7cc-e3a1-4847-8c7c-340330c2d74c became leader
	I0420 01:31:12.082437       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-269507_3dd4b7cc-e3a1-4847-8c7c-340330c2d74c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-269507 -n embed-certs-269507
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-269507 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jwbst
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-269507 describe pod metrics-server-569cc877fc-jwbst
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-269507 describe pod metrics-server-569cc877fc-jwbst: exit status 1 (65.118241ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jwbst" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-269507 describe pod metrics-server-569cc877fc-jwbst: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)
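
Note on the kubelet errors in the log above: the repeated ImagePullBackOff for metrics-server is expected in this run, because the addon was enabled with its image registry redirected to the unreachable fake.domain host (recorded in the Audit table later in this report). A rough manual check of that pull failure, assuming the pod name shown in the kubelet log and a still-running cluster, would be something like:

	kubectl --context embed-certs-269507 -n kube-system get pods | grep metrics-server
	kubectl --context embed-certs-269507 -n kube-system describe pod metrics-server-569cc877fc-jwbst | grep -A3 'Back-off'

This is only an illustrative sketch, not part of the test; by the time the post-mortem describe ran above, that pod had already been replaced, which is why it returned NotFound.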

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0420 01:32:53.152928   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
E0420 01:33:11.657602   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 01:33:17.546775   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:33:30.864878   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 01:33:49.905242   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
E0420 01:33:54.474891   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-338118 -n no-preload-338118
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-20 01:41:48.40417964 +0000 UTC m=+6281.919047250
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
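For reference, the wait this test performs is roughly equivalent to the following kubectl invocation (context, namespace, selector, and timeout taken from the output above; the flags are an approximation of the test's behaviour, not its actual implementation):

	kubectl --context no-preload-338118 -n kubernetes-dashboard wait pod --selector=k8s-app=kubernetes-dashboard --for=condition=ready --timeout=540s

Because no pod matching k8s-app=kubernetes-dashboard ever appeared, an equivalent manual wait would likewise time out.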
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-338118 -n no-preload-338118
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-338118 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-338118 logs -n 25: (2.131811019s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-172352 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | disable-driver-mounts-172352                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:17 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-338118             | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC | 20 Apr 24 01:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-907988  | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-269507            | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-564860        | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-338118                  | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-907988       | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:30 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-269507                 | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-564860             | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:21:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:21:33.400343  142411 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:21:33.400444  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400452  142411 out.go:304] Setting ErrFile to fd 2...
	I0420 01:21:33.400464  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400681  142411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:21:33.401213  142411 out.go:298] Setting JSON to false
	I0420 01:21:33.402151  142411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14640,"bootTime":1713561453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:21:33.402214  142411 start.go:139] virtualization: kvm guest
	I0420 01:21:33.404200  142411 out.go:177] * [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:21:33.405933  142411 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:21:33.407240  142411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:21:33.405946  142411 notify.go:220] Checking for updates...
	I0420 01:21:33.408693  142411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:21:33.409906  142411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:21:33.411155  142411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:21:33.412528  142411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:21:33.414062  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:21:33.414460  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.414524  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.428987  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0420 01:21:33.429348  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.429850  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.429873  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.430178  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.430370  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.431825  142411 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0420 01:21:33.432895  142411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:21:33.433209  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.433251  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.447157  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0420 01:21:33.447543  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.448080  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.448123  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.448444  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.448609  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.481664  142411 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:21:33.482784  142411 start.go:297] selected driver: kvm2
	I0420 01:21:33.482796  142411 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-5
64860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.482903  142411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:21:33.483572  142411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.483646  142411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:21:33.497421  142411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:21:33.497790  142411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:21:33.497854  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:21:33.497869  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:21:33.497915  142411 start.go:340] cluster config:
	{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.498027  142411 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.499624  142411 out.go:177] * Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	I0420 01:21:33.500874  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:21:33.500901  142411 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:21:33.500914  142411 cache.go:56] Caching tarball of preloaded images
	I0420 01:21:33.500992  142411 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:21:33.501007  142411 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 01:21:33.501116  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:21:33.501613  142411 start.go:360] acquireMachinesLock for old-k8s-version-564860: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:21:35.817529  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:38.889617  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:44.969590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:48.041555  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:54.121550  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:57.193604  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:03.273575  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:06.345487  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:12.425567  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:15.497538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:21.577563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:24.649534  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:30.729573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:33.801566  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:39.881590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:42.953591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:49.033641  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:52.105579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:58.185591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:01.257655  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:07.337585  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:10.409568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:16.489562  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:19.561602  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:25.641579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:28.713581  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:34.793618  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:37.865643  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:43.945593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:47.017561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:53.097597  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:56.169538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:02.249561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:05.321557  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:11.401563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:14.473539  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:20.553591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:23.625573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:29.705563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:32.777590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:38.857568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:41.929619  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:48.009565  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:51.081536  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:57.161593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:00.233633  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:03.237801  141927 start.go:364] duration metric: took 4m24.096402827s to acquireMachinesLock for "default-k8s-diff-port-907988"
	I0420 01:25:03.237873  141927 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:03.237883  141927 fix.go:54] fixHost starting: 
	I0420 01:25:03.238412  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:03.238453  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:03.254029  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I0420 01:25:03.254570  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:03.255071  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:25:03.255097  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:03.255474  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:03.255703  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:03.255871  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:25:03.257395  141927 fix.go:112] recreateIfNeeded on default-k8s-diff-port-907988: state=Stopped err=<nil>
	I0420 01:25:03.257430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	W0420 01:25:03.257577  141927 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:03.259083  141927 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-907988" ...
	I0420 01:25:03.260199  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Start
	I0420 01:25:03.260402  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring networks are active...
	I0420 01:25:03.261176  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network default is active
	I0420 01:25:03.261553  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network mk-default-k8s-diff-port-907988 is active
	I0420 01:25:03.262016  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Getting domain xml...
	I0420 01:25:03.262834  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Creating domain...
	I0420 01:25:03.235208  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:03.235275  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235620  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:25:03.235653  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235902  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:25:03.237636  141746 machine.go:97] duration metric: took 4m37.412949021s to provisionDockerMachine
	I0420 01:25:03.237677  141746 fix.go:56] duration metric: took 4m37.433896084s for fixHost
	I0420 01:25:03.237685  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 4m37.433927307s
	W0420 01:25:03.237715  141746 start.go:713] error starting host: provision: host is not running
	W0420 01:25:03.237980  141746 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0420 01:25:03.238076  141746 start.go:728] Will try again in 5 seconds ...
	I0420 01:25:04.453535  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting to get IP...
	I0420 01:25:04.454427  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454803  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454886  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.454785  143129 retry.go:31] will retry after 205.593849ms: waiting for machine to come up
	I0420 01:25:04.662560  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663106  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663133  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.663007  143129 retry.go:31] will retry after 246.821866ms: waiting for machine to come up
	I0420 01:25:04.911578  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912067  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912100  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.912014  143129 retry.go:31] will retry after 478.36287ms: waiting for machine to come up
	I0420 01:25:05.391624  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392063  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.391965  143129 retry.go:31] will retry after 495.387005ms: waiting for machine to come up
	I0420 01:25:05.888569  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889093  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889116  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.889009  143129 retry.go:31] will retry after 721.867239ms: waiting for machine to come up
	I0420 01:25:06.613018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613550  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613583  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:06.613495  143129 retry.go:31] will retry after 724.502229ms: waiting for machine to come up
	I0420 01:25:07.339473  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339974  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:07.339883  143129 retry.go:31] will retry after 916.936196ms: waiting for machine to come up
	I0420 01:25:08.258657  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259033  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259064  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:08.258981  143129 retry.go:31] will retry after 1.088675043s: waiting for machine to come up
	I0420 01:25:08.239597  141746 start.go:360] acquireMachinesLock for no-preload-338118: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:25:09.349021  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349421  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:09.349362  143129 retry.go:31] will retry after 1.139610002s: waiting for machine to come up
	I0420 01:25:10.490715  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491162  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491190  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:10.491119  143129 retry.go:31] will retry after 1.625829976s: waiting for machine to come up
	I0420 01:25:12.118751  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119231  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119254  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:12.119184  143129 retry.go:31] will retry after 2.904309002s: waiting for machine to come up
	I0420 01:25:15.025713  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026310  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:15.026227  143129 retry.go:31] will retry after 3.471792967s: waiting for machine to come up
	I0420 01:25:18.500247  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500679  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:18.500595  143129 retry.go:31] will retry after 4.499766051s: waiting for machine to come up
	I0420 01:25:23.005446  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.005935  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Found IP for machine: 192.168.39.222
	I0420 01:25:23.005956  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserving static IP address...
	I0420 01:25:23.005970  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has current primary IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.006453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.006479  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserved static IP address: 192.168.39.222
	I0420 01:25:23.006513  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | skip adding static IP to network mk-default-k8s-diff-port-907988 - found existing host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"}
	I0420 01:25:23.006537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for SSH to be available...
	I0420 01:25:23.006544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Getting to WaitForSSH function...
	I0420 01:25:23.009090  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009505  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.009537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009658  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH client type: external
	I0420 01:25:23.009695  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa (-rw-------)
	I0420 01:25:23.009732  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:23.009748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | About to run SSH command:
	I0420 01:25:23.009766  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | exit 0
	I0420 01:25:23.133489  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | SSH cmd err, output: <nil>: 
	I0420 01:25:23.133940  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetConfigRaw
	I0420 01:25:23.134589  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.137340  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.137685  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.137708  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.138000  141927 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/config.json ...
	I0420 01:25:23.138228  141927 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:23.138253  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:23.138461  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.140536  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.140815  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.140841  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.141024  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.141244  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141450  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141595  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.141777  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.142053  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.142067  141927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:23.249946  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:23.249979  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250250  141927 buildroot.go:166] provisioning hostname "default-k8s-diff-port-907988"
	I0420 01:25:23.250280  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250483  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.253030  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.253456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253564  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.253755  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.253978  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.254135  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.254334  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.254504  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.254517  141927 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-907988 && echo "default-k8s-diff-port-907988" | sudo tee /etc/hostname
	I0420 01:25:23.379061  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-907988
	
	I0420 01:25:23.379092  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.381893  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382249  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.382278  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382465  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.382666  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382831  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382939  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.383118  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.383324  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.383349  141927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-907988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-907988/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-907988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:23.499869  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:23.499903  141927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:23.499932  141927 buildroot.go:174] setting up certificates
	I0420 01:25:23.499941  141927 provision.go:84] configureAuth start
	I0420 01:25:23.499950  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.500178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.502735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503050  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.503085  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503201  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.505586  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.505924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.505968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.506036  141927 provision.go:143] copyHostCerts
	I0420 01:25:23.506136  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:23.506150  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:23.506233  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:23.506386  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:23.506396  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:23.506444  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:23.506525  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:23.506536  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:23.506569  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:23.506640  141927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-907988 san=[127.0.0.1 192.168.39.222 default-k8s-diff-port-907988 localhost minikube]
	I0420 01:25:23.598855  141927 provision.go:177] copyRemoteCerts
	I0420 01:25:23.598930  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:23.598967  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.602183  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.602544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.602903  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.603143  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.603301  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:23.688294  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:23.714719  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0420 01:25:23.744530  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:25:23.774733  141927 provision.go:87] duration metric: took 274.778779ms to configureAuth
	I0420 01:25:23.774756  141927 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:23.774990  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:23.775083  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.777817  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.778213  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778376  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.778596  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778763  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778984  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.779167  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.779364  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.779393  141927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:24.314463  142057 start.go:364] duration metric: took 4m32.915907541s to acquireMachinesLock for "embed-certs-269507"
	I0420 01:25:24.314618  142057 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:24.314645  142057 fix.go:54] fixHost starting: 
	I0420 01:25:24.315169  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:24.315220  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:24.331820  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0420 01:25:24.332243  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:24.332707  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:25:24.332730  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:24.333157  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:24.333371  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:24.333551  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:25:24.335004  142057 fix.go:112] recreateIfNeeded on embed-certs-269507: state=Stopped err=<nil>
	I0420 01:25:24.335044  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	W0420 01:25:24.335211  142057 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:24.337246  142057 out.go:177] * Restarting existing kvm2 VM for "embed-certs-269507" ...
	I0420 01:25:24.056795  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:24.056832  141927 machine.go:97] duration metric: took 918.585863ms to provisionDockerMachine
	I0420 01:25:24.056849  141927 start.go:293] postStartSetup for "default-k8s-diff-port-907988" (driver="kvm2")
	I0420 01:25:24.056865  141927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:24.056889  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.057250  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:24.057281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.060602  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.060992  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.061028  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.061196  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.061422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.061631  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.061785  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.152109  141927 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:24.157292  141927 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:24.157330  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:24.157397  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:24.157490  141927 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:24.157606  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:24.171039  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:24.201343  141927 start.go:296] duration metric: took 144.476748ms for postStartSetup
	I0420 01:25:24.201383  141927 fix.go:56] duration metric: took 20.963499628s for fixHost
	I0420 01:25:24.201409  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.204283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204648  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.204681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204842  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.205022  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205204  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205411  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.205732  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:24.206255  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:24.206269  141927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:24.314311  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576324.296261493
	
	I0420 01:25:24.314336  141927 fix.go:216] guest clock: 1713576324.296261493
	I0420 01:25:24.314346  141927 fix.go:229] Guest: 2024-04-20 01:25:24.296261493 +0000 UTC Remote: 2024-04-20 01:25:24.201388226 +0000 UTC m=+285.207728057 (delta=94.873267ms)
	I0420 01:25:24.314373  141927 fix.go:200] guest clock delta is within tolerance: 94.873267ms
	I0420 01:25:24.314380  141927 start.go:83] releasing machines lock for "default-k8s-diff-port-907988", held for 21.076529311s
	I0420 01:25:24.314420  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.314699  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:24.317281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.317731  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317858  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318364  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318557  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318664  141927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:24.318723  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.318833  141927 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:24.318862  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.321519  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321572  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321937  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.321968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321994  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.322011  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.322121  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322233  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322323  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322502  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322730  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.322871  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.403742  141927 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:24.429207  141927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:24.590621  141927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:24.597818  141927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:24.597890  141927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:24.617031  141927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:24.617050  141927 start.go:494] detecting cgroup driver to use...
	I0420 01:25:24.617126  141927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:24.643134  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:24.658222  141927 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:24.658275  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:24.672409  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:24.686722  141927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:24.810871  141927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:24.965702  141927 docker.go:233] disabling docker service ...
	I0420 01:25:24.965765  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:24.984504  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:24.999580  141927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:25.151023  141927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:25.278443  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:25.295439  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:25.316425  141927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:25.316494  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.329052  141927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:25.329119  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.342102  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.354831  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.368084  141927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:25.380515  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.392952  141927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.411707  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.423776  141927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:25.434175  141927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:25.434234  141927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:25.449180  141927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:25.460018  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:25.579669  141927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:25.741777  141927 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:25.741854  141927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:25.747422  141927 start.go:562] Will wait 60s for crictl version
	I0420 01:25:25.747478  141927 ssh_runner.go:195] Run: which crictl
	I0420 01:25:25.752164  141927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:25.800400  141927 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:25.800491  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.832099  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.865692  141927 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:25:24.338547  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Start
	I0420 01:25:24.338743  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring networks are active...
	I0420 01:25:24.339527  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network default is active
	I0420 01:25:24.340064  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network mk-embed-certs-269507 is active
	I0420 01:25:24.340520  142057 main.go:141] libmachine: (embed-certs-269507) Getting domain xml...
	I0420 01:25:24.341363  142057 main.go:141] libmachine: (embed-certs-269507) Creating domain...
	I0420 01:25:25.566725  142057 main.go:141] libmachine: (embed-certs-269507) Waiting to get IP...
	I0420 01:25:25.567704  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.568195  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.568263  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.568160  143271 retry.go:31] will retry after 229.672507ms: waiting for machine to come up
	I0420 01:25:25.799515  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.799964  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.799994  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.799916  143271 retry.go:31] will retry after 352.048372ms: waiting for machine to come up
	I0420 01:25:26.153710  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.154217  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.154245  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.154159  143271 retry.go:31] will retry after 451.404487ms: waiting for machine to come up
	I0420 01:25:25.867283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:25.870225  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.870725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:25.870748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.871001  141927 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:25.875986  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:25.890923  141927 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:25.891043  141927 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:25.891088  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:25.934665  141927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:25.934743  141927 ssh_runner.go:195] Run: which lz4
	I0420 01:25:25.939157  141927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:25:25.943759  141927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:25.943788  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:27.674416  141927 crio.go:462] duration metric: took 1.735279369s to copy over tarball
	I0420 01:25:27.674484  141927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:25:26.607751  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.608327  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.608362  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.608273  143271 retry.go:31] will retry after 548.149542ms: waiting for machine to come up
	I0420 01:25:27.157746  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.158193  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.158220  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.158158  143271 retry.go:31] will retry after 543.066807ms: waiting for machine to come up
	I0420 01:25:27.702417  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.702812  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.702842  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.702778  143271 retry.go:31] will retry after 801.842999ms: waiting for machine to come up
	I0420 01:25:28.505673  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:28.506233  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:28.506264  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:28.506169  143271 retry.go:31] will retry after 1.176665861s: waiting for machine to come up
	I0420 01:25:29.684134  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:29.684642  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:29.684676  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:29.684582  143271 retry.go:31] will retry after 1.09397916s: waiting for machine to come up
	I0420 01:25:30.780467  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:30.780962  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:30.780987  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:30.780924  143271 retry.go:31] will retry after 1.560706704s: waiting for machine to come up
	I0420 01:25:30.280138  141927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.605620888s)
	I0420 01:25:30.280235  141927 crio.go:469] duration metric: took 2.605784372s to extract the tarball
	I0420 01:25:30.280269  141927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:30.323590  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:30.384053  141927 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:30.384083  141927 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:30.384094  141927 kubeadm.go:928] updating node { 192.168.39.222 8444 v1.30.0 crio true true} ...
	I0420 01:25:30.384258  141927 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-907988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
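	The kubelet ExecStart line above is rendered from the node's parameters and written out as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below). A hedged Go sketch of that templating step; the struct and field names are illustrative, not minikube's own.
	package main

	import (
		"os"
		"text/template"
	)

	// kubeletParams is a hypothetical subset of the data fed into the
	// drop-in template; only fields used by the ExecStart line appear.
	type kubeletParams struct {
		BinDir   string
		NodeName string
		NodeIP   string
	}

	const execStart = "ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}\n"

	func main() {
		t := template.Must(template.New("kubelet").Parse(execStart))
		_ = t.Execute(os.Stdout, kubeletParams{
			BinDir:   "/var/lib/minikube/binaries/v1.30.0",
			NodeName: "default-k8s-diff-port-907988",
			NodeIP:   "192.168.39.222",
		})
	}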
	I0420 01:25:30.384347  141927 ssh_runner.go:195] Run: crio config
	I0420 01:25:30.431033  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:30.431059  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:30.431074  141927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:30.431094  141927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-907988 NodeName:default-k8s-diff-port-907988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:30.431267  141927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-907988"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
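	The rendered manifest above is four YAML documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration), later written to /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch that walks such a multi-document file and prints each document's kind, assuming the gopkg.in/yaml.v3 module is available; the file path matches the one used in the log.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			// Each document in the generated file carries apiVersion and kind.
			fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
		}
	}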
	I0420 01:25:30.431327  141927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:30.444735  141927 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:30.444807  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:30.457543  141927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0420 01:25:30.477858  141927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:30.497632  141927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0420 01:25:30.518062  141927 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:30.522820  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:30.538677  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:30.686290  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:30.721316  141927 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988 for IP: 192.168.39.222
	I0420 01:25:30.721342  141927 certs.go:194] generating shared ca certs ...
	I0420 01:25:30.721373  141927 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:30.721607  141927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:30.721664  141927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:30.721679  141927 certs.go:256] generating profile certs ...
	I0420 01:25:30.721789  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/client.key
	I0420 01:25:30.721873  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key.b8de10ae
	I0420 01:25:30.721912  141927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key
	I0420 01:25:30.722019  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:30.722052  141927 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:30.722067  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:30.722094  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:30.722122  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:30.722144  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:30.722189  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:30.723048  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:30.762666  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:30.800218  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:30.849282  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:30.893355  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0420 01:25:30.924642  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:30.956734  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:30.986491  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:31.015876  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:31.043860  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:31.073822  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:31.100731  141927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:31.119908  141927 ssh_runner.go:195] Run: openssl version
	I0420 01:25:31.128209  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:31.140164  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145371  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145432  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.151726  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:31.163371  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:31.175115  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180237  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180286  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.186548  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:31.198703  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:31.211529  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217258  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217326  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.223822  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
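	The three blocks above make each extra CA trusted system-wide: hash the certificate with `openssl x509 -hash -noout` and point a <hash>.0 symlink in /etc/ssl/certs at it. A rough Go equivalent that shells out to openssl for the hash; it assumes openssl is on PATH and that the process may write to /etc/ssl/certs.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCA recreates the `ln -fs` step from the log: compute the OpenSSL
	// subject hash of the certificate and link /etc/ssl/certs/<hash>.0 to it.
	func linkCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // emulate the -f in ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked")
	}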
	I0420 01:25:31.236363  141927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:31.241793  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:31.250826  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:31.259850  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:31.267387  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:31.274477  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:31.281452  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
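	Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours. The same check can be done with Go's standard library; a minimal sketch, using one of the certificate paths from the log.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// will expire within d, mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}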
	I0420 01:25:31.287980  141927 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:31.288094  141927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:31.288159  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.344552  141927 cri.go:89] found id: ""
	I0420 01:25:31.344646  141927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:31.357049  141927 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:31.357075  141927 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:31.357081  141927 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:31.357147  141927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:31.368636  141927 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:31.370055  141927 kubeconfig.go:125] found "default-k8s-diff-port-907988" server: "https://192.168.39.222:8444"
	I0420 01:25:31.373063  141927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:31.384821  141927 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.222
	I0420 01:25:31.384861  141927 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:31.384876  141927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:31.384946  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.432801  141927 cri.go:89] found id: ""
	I0420 01:25:31.432902  141927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:31.458842  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:31.472706  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:31.472728  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:31.472780  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:25:31.486221  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:31.486276  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:31.500036  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:25:31.510180  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:31.510237  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:31.520560  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.530333  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:31.530387  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.541053  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:25:31.551200  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:31.551257  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:31.561364  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:31.572967  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:31.690537  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.319980  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.546554  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.631937  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.729738  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:32.729838  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.230769  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.730452  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.807772  141927 api_server.go:72] duration metric: took 1.07803345s to wait for apiserver process to appear ...
	I0420 01:25:33.807805  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:33.807829  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:33.808551  141927 api_server.go:269] stopped: https://192.168.39.222:8444/healthz: Get "https://192.168.39.222:8444/healthz": dial tcp 192.168.39.222:8444: connect: connection refused
	I0420 01:25:32.342951  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:32.343373  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:32.343420  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:32.343352  143271 retry.go:31] will retry after 1.871100952s: waiting for machine to come up
	I0420 01:25:34.215884  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:34.216313  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:34.216341  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:34.216253  143271 retry.go:31] will retry after 2.017753728s: waiting for machine to come up
	I0420 01:25:36.237296  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:36.237906  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:36.237936  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:36.237856  143271 retry.go:31] will retry after 3.431912056s: waiting for machine to come up
	I0420 01:25:34.308465  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.098889  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.098928  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.098945  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.149496  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.149534  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.308936  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.313975  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.314005  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:37.808680  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.818747  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.818784  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.307905  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.318528  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.318563  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.808127  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.816135  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.816167  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.307985  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.313712  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.313753  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.808225  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.812825  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.812858  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.308366  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.312930  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:40.312970  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.808320  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.812979  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:25:40.820265  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:25:40.820289  141927 api_server.go:131] duration metric: took 7.012476869s to wait for apiserver health ...
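	The health wait above goes through the usual phases: connection refused while the apiserver process starts, 403 for the anonymous probe until the RBAC bootstrap roles exist, 500 while poststart hooks are still failing, then 200. A minimal Go sketch of such a poll loop; TLS verification is skipped purely for illustration, whereas the real client authenticates with cluster certificates.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz polls the apiserver /healthz endpoint until it returns
	// 200 or the timeout elapses; any other status means "not ready yet".
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// 403 (RBAC bootstrap pending) and 500 (poststart hooks
				// still failing) are both treated as transient.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := pollHealthz("https://192.168.39.222:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}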
	I0420 01:25:40.820298  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:40.820304  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:40.822367  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:25:39.671070  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:39.671556  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:39.671614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:39.671502  143271 retry.go:31] will retry after 3.954438708s: waiting for machine to come up
	I0420 01:25:40.823843  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:25:40.837960  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:25:40.858294  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:25:40.867542  141927 system_pods.go:59] 8 kube-system pods found
	I0420 01:25:40.867577  141927 system_pods.go:61] "coredns-7db6d8ff4d-7v886" [0e0b3a5f-041a-4bbc-94aa-c9571a8761ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:25:40.867584  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [88f687c4-8865-4fe6-92f1-448cfde6117c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:25:40.867590  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [2c9f0d90-35c6-45ad-b9b1-9504c55a1e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:25:40.867597  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [949ce449-06b4-4650-8ba0-7567637d6aec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:25:40.867604  141927 system_pods.go:61] "kube-proxy-dg6xn" [1124d9e8-41aa-44a9-8a4a-eafd2cd6c6c9] Running
	I0420 01:25:40.867626  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [df93de11-c23d-4f5d-afd4-1af7928933fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:25:40.867640  141927 system_pods.go:61] "metrics-server-569cc877fc-rqqlt" [2c7d91c3-fce8-4603-a7be-8d9b415d71f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:25:40.867647  141927 system_pods.go:61] "storage-provisioner" [af4dc99d-feef-4c24-852a-4c8cad22dd7d] Running
	I0420 01:25:40.867654  141927 system_pods.go:74] duration metric: took 9.33485ms to wait for pod list to return data ...
	I0420 01:25:40.867670  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:25:40.871045  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:25:40.871067  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:25:40.871078  141927 node_conditions.go:105] duration metric: took 3.402743ms to run NodePressure ...
	I0420 01:25:40.871094  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:41.142438  141927 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151801  141927 kubeadm.go:733] kubelet initialised
	I0420 01:25:41.151822  141927 kubeadm.go:734] duration metric: took 9.359538ms waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151830  141927 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:25:41.160583  141927 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.169184  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169214  141927 pod_ready.go:81] duration metric: took 8.596607ms for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.169226  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169234  141927 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.175518  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175544  141927 pod_ready.go:81] duration metric: took 6.298273ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.175558  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175567  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.189038  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189062  141927 pod_ready.go:81] duration metric: took 13.484198ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.189072  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189078  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.261162  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261191  141927 pod_ready.go:81] duration metric: took 72.106763ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.261203  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261210  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662532  141927 pod_ready.go:92] pod "kube-proxy-dg6xn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:41.662553  141927 pod_ready.go:81] duration metric: took 401.337101ms for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662562  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:43.670281  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:45.122924  142411 start.go:364] duration metric: took 4m11.621269498s to acquireMachinesLock for "old-k8s-version-564860"
	I0420 01:25:45.122996  142411 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:45.123018  142411 fix.go:54] fixHost starting: 
	I0420 01:25:45.123538  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:45.123581  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:45.141340  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0420 01:25:45.141873  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:45.142555  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:25:45.142592  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:45.142979  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:45.143234  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:25:45.143426  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetState
	I0420 01:25:45.145067  142411 fix.go:112] recreateIfNeeded on old-k8s-version-564860: state=Stopped err=<nil>
	I0420 01:25:45.145114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	W0420 01:25:45.145289  142411 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:45.147498  142411 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-564860" ...
	I0420 01:25:43.630616  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631126  142057 main.go:141] libmachine: (embed-certs-269507) Found IP for machine: 192.168.50.184
	I0420 01:25:43.631159  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has current primary IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631173  142057 main.go:141] libmachine: (embed-certs-269507) Reserving static IP address...
	I0420 01:25:43.631625  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.631677  142057 main.go:141] libmachine: (embed-certs-269507) DBG | skip adding static IP to network mk-embed-certs-269507 - found existing host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"}
	I0420 01:25:43.631692  142057 main.go:141] libmachine: (embed-certs-269507) Reserved static IP address: 192.168.50.184
	I0420 01:25:43.631710  142057 main.go:141] libmachine: (embed-certs-269507) Waiting for SSH to be available...
	I0420 01:25:43.631731  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Getting to WaitForSSH function...
	I0420 01:25:43.634292  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.634650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634833  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH client type: external
	I0420 01:25:43.634883  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa (-rw-------)
	I0420 01:25:43.634916  142057 main.go:141] libmachine: (embed-certs-269507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:43.634935  142057 main.go:141] libmachine: (embed-certs-269507) DBG | About to run SSH command:
	I0420 01:25:43.634949  142057 main.go:141] libmachine: (embed-certs-269507) DBG | exit 0
	I0420 01:25:43.757712  142057 main.go:141] libmachine: (embed-certs-269507) DBG | SSH cmd err, output: <nil>: 
	I0420 01:25:43.758118  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetConfigRaw
	I0420 01:25:43.758820  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:43.761626  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762007  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.762083  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762328  142057 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/config.json ...
	I0420 01:25:43.762556  142057 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:43.762575  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:43.762827  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.765841  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.766304  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766461  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.766636  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766766  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766884  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.767111  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.767371  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.767386  142057 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:43.874709  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:43.874741  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875018  142057 buildroot.go:166] provisioning hostname "embed-certs-269507"
	I0420 01:25:43.875052  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875265  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.878226  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878645  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.878675  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878767  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.878976  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879120  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879246  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.879375  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.879585  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.879613  142057 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-269507 && echo "embed-certs-269507" | sudo tee /etc/hostname
	I0420 01:25:44.003458  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-269507
	
	I0420 01:25:44.003502  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.006277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006706  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.006745  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006922  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.007227  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007417  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007604  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.007772  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.007959  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.007979  142057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-269507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-269507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-269507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:44.124457  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:44.124494  142057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:44.124516  142057 buildroot.go:174] setting up certificates
	I0420 01:25:44.124526  142057 provision.go:84] configureAuth start
	I0420 01:25:44.124537  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:44.124850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:44.127589  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.127958  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.127980  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.128196  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.130485  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130792  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.130830  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130992  142057 provision.go:143] copyHostCerts
	I0420 01:25:44.131060  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:44.131075  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:44.131132  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:44.131237  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:44.131246  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:44.131266  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:44.131326  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:44.131333  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:44.131349  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:44.131397  142057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.embed-certs-269507 san=[127.0.0.1 192.168.50.184 embed-certs-269507 localhost minikube]
	I0420 01:25:44.404404  142057 provision.go:177] copyRemoteCerts
	I0420 01:25:44.404469  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:44.404498  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.407318  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.407683  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.408033  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.408182  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.408307  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.498069  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:44.524979  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0420 01:25:44.553537  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:25:44.580307  142057 provision.go:87] duration metric: took 455.767679ms to configureAuth
	I0420 01:25:44.580332  142057 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:44.580609  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:44.580722  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.583352  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583728  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.583761  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583978  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.584205  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584383  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584516  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.584715  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.584905  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.584926  142057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:44.882565  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:44.882599  142057 machine.go:97] duration metric: took 1.120028956s to provisionDockerMachine
	I0420 01:25:44.882612  142057 start.go:293] postStartSetup for "embed-certs-269507" (driver="kvm2")
	I0420 01:25:44.882622  142057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:44.882639  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:44.882971  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:44.883012  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.885829  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886181  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.886208  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886372  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.886598  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.886761  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.886915  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.972428  142057 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:44.977228  142057 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:44.977257  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:44.977344  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:44.977435  142057 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:44.977552  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:44.987372  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:45.014435  142057 start.go:296] duration metric: took 131.807177ms for postStartSetup
	I0420 01:25:45.014484  142057 fix.go:56] duration metric: took 20.699839101s for fixHost
	I0420 01:25:45.014512  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.017361  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017768  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.017795  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017943  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.018150  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018302  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018421  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.018643  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:45.018815  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:45.018827  142057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:45.122766  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576345.101529100
	
	I0420 01:25:45.122788  142057 fix.go:216] guest clock: 1713576345.101529100
	I0420 01:25:45.122796  142057 fix.go:229] Guest: 2024-04-20 01:25:45.1015291 +0000 UTC Remote: 2024-04-20 01:25:45.014489313 +0000 UTC m=+293.764572165 (delta=87.039787ms)
	I0420 01:25:45.122823  142057 fix.go:200] guest clock delta is within tolerance: 87.039787ms
	I0420 01:25:45.122828  142057 start.go:83] releasing machines lock for "embed-certs-269507", held for 20.808247089s
	I0420 01:25:45.122851  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.123156  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:45.125956  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126377  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.126408  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126536  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127059  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127264  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127349  142057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:45.127404  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.127470  142057 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:45.127497  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.130071  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130393  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130427  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130447  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130727  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.130825  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130854  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130932  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131041  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.131115  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131220  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.131301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131451  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131597  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.211824  142057 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:45.236425  142057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:45.383069  142057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:45.391072  142057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:45.391159  142057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:45.410287  142057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:45.410313  142057 start.go:494] detecting cgroup driver to use...
	I0420 01:25:45.410395  142057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:45.433663  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:45.452933  142057 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:45.452999  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:45.473208  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:45.493261  142057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:45.650111  142057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:45.847482  142057 docker.go:233] disabling docker service ...
	I0420 01:25:45.847559  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:45.871032  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:45.892747  142057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:46.076222  142057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:46.218078  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:46.236006  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:46.259279  142057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:46.259363  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.272573  142057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:46.272647  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.286468  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.298708  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.313197  142057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:46.332844  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.345531  142057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.367686  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.379702  142057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:46.390491  142057 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:46.390558  142057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:46.406027  142057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:46.417370  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:46.543690  142057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:46.725507  142057 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:46.725599  142057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:46.734173  142057 start.go:562] Will wait 60s for crictl version
	I0420 01:25:46.734246  142057 ssh_runner.go:195] Run: which crictl
	I0420 01:25:46.740381  142057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:46.801341  142057 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:46.801431  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.843121  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.889958  142057 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:25:45.148885  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .Start
	I0420 01:25:45.149115  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring networks are active...
	I0420 01:25:45.149856  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network default is active
	I0420 01:25:45.150205  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network mk-old-k8s-version-564860 is active
	I0420 01:25:45.150615  142411 main.go:141] libmachine: (old-k8s-version-564860) Getting domain xml...
	I0420 01:25:45.151296  142411 main.go:141] libmachine: (old-k8s-version-564860) Creating domain...
	I0420 01:25:46.465532  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting to get IP...
	I0420 01:25:46.466816  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.467306  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.467383  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.467288  143434 retry.go:31] will retry after 265.980653ms: waiting for machine to come up
	I0420 01:25:46.735144  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.735676  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.735700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.735627  143434 retry.go:31] will retry after 254.534112ms: waiting for machine to come up
	I0420 01:25:46.992222  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.992707  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.992738  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.992621  143434 retry.go:31] will retry after 434.179962ms: waiting for machine to come up
	I0420 01:25:47.428397  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.428949  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.428987  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.428899  143434 retry.go:31] will retry after 533.143168ms: waiting for machine to come up
	I0420 01:25:47.963467  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.964008  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.964035  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.963957  143434 retry.go:31] will retry after 601.536298ms: waiting for machine to come up
	I0420 01:25:45.675159  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:48.175457  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:48.175487  141927 pod_ready.go:81] duration metric: took 6.512916578s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:48.175499  141927 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:46.891233  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:46.894647  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895107  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:46.895170  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895398  142057 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:46.900604  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:46.920025  142057 kubeadm.go:877] updating cluster {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:46.920184  142057 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:46.920247  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:46.967086  142057 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:46.967171  142057 ssh_runner.go:195] Run: which lz4
	I0420 01:25:46.973391  142057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:25:46.979210  142057 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:46.979241  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:48.806615  142057 crio.go:462] duration metric: took 1.83326325s to copy over tarball
	I0420 01:25:48.806701  142057 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:25:48.567922  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:48.568436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:48.568469  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:48.568387  143434 retry.go:31] will retry after 853.809635ms: waiting for machine to come up
	I0420 01:25:49.423590  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:49.424154  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:49.424178  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:49.424099  143434 retry.go:31] will retry after 1.096859163s: waiting for machine to come up
	I0420 01:25:50.522906  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:50.523406  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:50.523436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:50.523350  143434 retry.go:31] will retry after 983.057252ms: waiting for machine to come up
	I0420 01:25:51.508033  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:51.508557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:51.508596  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:51.508497  143434 retry.go:31] will retry after 1.463876638s: waiting for machine to come up
	I0420 01:25:52.974032  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:52.974508  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:52.974536  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:52.974459  143434 retry.go:31] will retry after 1.859889372s: waiting for machine to come up
	I0420 01:25:50.183489  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:53.262055  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:51.389972  142057 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.583237436s)
	I0420 01:25:51.390002  142057 crio.go:469] duration metric: took 2.583356337s to extract the tarball
	I0420 01:25:51.390010  142057 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:51.434741  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:51.489945  142057 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:51.489974  142057 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:51.489984  142057 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0420 01:25:51.490126  142057 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-269507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:51.490226  142057 ssh_runner.go:195] Run: crio config
	I0420 01:25:51.548273  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:25:51.548299  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:51.548316  142057 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:51.548356  142057 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-269507 NodeName:embed-certs-269507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:51.548534  142057 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-269507"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:51.548614  142057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:51.560359  142057 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:51.560428  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:51.571609  142057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0420 01:25:51.594462  142057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:51.621417  142057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
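The 2162 bytes copied to /var/tmp/minikube/kubeadm.yaml.new are the multi-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---"). As a rough, stdlib-only illustration of how such a file can be split and inspected, not minikube's actual code:

// kubeadmdocs.go: illustrative sketch; splits a multi-document kubeadm YAML
// (like the one written above) and prints each document's kind.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the log above; any multi-document YAML works.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read config:", err)
		os.Exit(1)
	}
	// YAML documents are separated by a line containing only "---".
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		sc := bufio.NewScanner(strings.NewReader(doc))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}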
	I0420 01:25:51.649250  142057 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:51.655304  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:51.675476  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:51.809652  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:51.829341  142057 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507 for IP: 192.168.50.184
	I0420 01:25:51.829405  142057 certs.go:194] generating shared ca certs ...
	I0420 01:25:51.829430  142057 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:51.829627  142057 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:51.829687  142057 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:51.829697  142057 certs.go:256] generating profile certs ...
	I0420 01:25:51.829823  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/client.key
	I0420 01:25:52.088423  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key.c1e63643
	I0420 01:25:52.088542  142057 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key
	I0420 01:25:52.088748  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:52.088811  142057 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:52.088841  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:52.088880  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:52.088919  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:52.088959  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:52.089020  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:52.090046  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:52.130739  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:52.163426  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:52.202470  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:52.232070  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0420 01:25:52.265640  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:52.305670  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:52.336788  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:52.371507  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:52.403015  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:52.433761  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:52.461373  142057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:52.480675  142057 ssh_runner.go:195] Run: openssl version
	I0420 01:25:52.486965  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:52.499466  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506355  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506409  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.514625  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:52.530107  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:52.544051  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549426  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549495  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.555960  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:25:52.569332  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:52.583057  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588323  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588390  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.594622  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:52.607021  142057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:52.612270  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:52.619182  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:52.626168  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:52.633276  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:52.639840  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:52.646478  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
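The six openssl invocations above use -checkend 86400 to ask whether each control-plane certificate will still be valid 24 hours from now. A minimal Go equivalent of that check (illustrative sketch; the file path is just one of the certs from the log):

// certcheck.go: sketch of what "openssl x509 -checkend 86400" verifies.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // example path from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: fail if the certificate is no longer valid 24h from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}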
	I0420 01:25:52.652982  142057 kubeadm.go:391] StartCluster: {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:52.653130  142057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:52.653182  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.699113  142057 cri.go:89] found id: ""
	I0420 01:25:52.699200  142057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:52.712835  142057 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:52.712859  142057 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:52.712867  142057 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:52.712914  142057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:52.726130  142057 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:52.727354  142057 kubeconfig.go:125] found "embed-certs-269507" server: "https://192.168.50.184:8443"
	I0420 01:25:52.729600  142057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:52.744185  142057 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0420 01:25:52.744217  142057 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:52.744231  142057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:52.744292  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.792889  142057 cri.go:89] found id: ""
	I0420 01:25:52.792967  142057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:52.812771  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:52.824478  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:52.824495  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:52.824533  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:25:52.835612  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:52.835679  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:52.847089  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:25:52.858049  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:52.858126  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:52.872787  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.886588  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:52.886649  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.899467  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:25:52.910884  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:52.910942  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:52.922217  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:52.933432  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:53.108167  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.044709  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.257949  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.327450  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.426738  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:54.426849  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:54.926955  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.427198  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.489075  142057 api_server.go:72] duration metric: took 1.06233038s to wait for apiserver process to appear ...
	I0420 01:25:55.489109  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:55.489137  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:55.489682  142057 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0420 01:25:55.989278  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:54.836137  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:54.836639  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:54.836670  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:54.836584  143434 retry.go:31] will retry after 2.172259495s: waiting for machine to come up
	I0420 01:25:57.011412  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:57.011810  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:57.011840  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:57.011782  143434 retry.go:31] will retry after 2.279304552s: waiting for machine to come up
	I0420 01:25:55.684205  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:57.686312  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:58.334562  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.334594  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.334614  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.344779  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.344814  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.490111  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.499158  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.499194  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:58.989417  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.996443  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.996477  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.489585  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.496235  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:59.496271  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.989892  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.994154  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:26:00.000276  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:26:00.000301  142057 api_server.go:131] duration metric: took 4.511183577s to wait for apiserver health ...
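The healthz wait above polls https://192.168.50.184:8443/healthz roughly every 500ms, tolerating connection refused, 403 and 500 responses until the endpoint answers 200 "ok". A stdlib-only sketch of that polling pattern (illustrative; minikube's real client trusts the generated cluster CA rather than skipping TLS verification):

// healthzwait.go: sketch of polling an apiserver /healthz endpoint until it returns 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The sketch skips TLS verification; minikube instead verifies against its cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.184:8443/healthz" // address taken from the log above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}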
	I0420 01:26:00.000311  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:26:00.000317  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:00.002217  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:26:00.003646  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:26:00.018114  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:26:00.040866  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:26:00.050481  142057 system_pods.go:59] 8 kube-system pods found
	I0420 01:26:00.050514  142057 system_pods.go:61] "coredns-7db6d8ff4d-79bzc" [af5f0029-75b5-4131-8c60-5a4fee48c618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:26:00.050524  142057 system_pods.go:61] "etcd-embed-certs-269507" [d6dfc301-0cfb-4bfb-99f7-948b77b38f53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:26:00.050533  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [915deee2-f571-4337-bcdc-07f40d06b9c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:26:00.050539  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [21c885b0-6d1b-4593-87f3-141e512af7dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:26:00.050545  142057 system_pods.go:61] "kube-proxy-crzk6" [d5972e9a-15cd-4b62-90d5-c10bdfa20989] Running
	I0420 01:26:00.050553  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [1e556102-d4c9-494c-baf2-ab7e62d7d1e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:26:00.050559  142057 system_pods.go:61] "metrics-server-569cc877fc-8s79l" [1dc06e4a-3f47-4ef1-8757-81262c52fe55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:26:00.050583  142057 system_pods.go:61] "storage-provisioner" [f7b03907-0042-48d8-981b-1b8e665d58e7] Running
	I0420 01:26:00.050600  142057 system_pods.go:74] duration metric: took 9.699819ms to wait for pod list to return data ...
	I0420 01:26:00.050608  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:26:00.053915  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:26:00.053964  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:26:00.053975  142057 node_conditions.go:105] duration metric: took 3.363162ms to run NodePressure ...
	I0420 01:26:00.053994  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:00.327736  142057 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332409  142057 kubeadm.go:733] kubelet initialised
	I0420 01:26:00.332434  142057 kubeadm.go:734] duration metric: took 4.671334ms waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332446  142057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:26:00.338296  142057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
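The pod_ready waits above repeatedly fetch each system pod and test its Ready condition. With client-go as a dependency and a kubeconfig at the default location, the check behind a single iteration looks roughly like this (illustrative sketch using the pod name from the log; not minikube's code):

// podready.go: rough sketch of checking a pod's Ready condition, as the pod_ready waits above do.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at $HOME/.kube/config pointing at the cluster under test.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-79bzc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}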
	I0420 01:25:59.292382  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:59.292905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:59.292939  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:59.292852  143434 retry.go:31] will retry after 4.056028382s: waiting for machine to come up
	I0420 01:26:03.350591  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:03.351022  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:26:03.351047  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:26:03.350978  143434 retry.go:31] will retry after 5.38819739s: waiting for machine to come up
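The retry.go lines above wait for the libvirt domain to obtain a DHCP lease, sleeping for progressively longer, jittered intervals (roughly 2.2s, 2.3s, 4.1s, 5.4s) between attempts. A generic sketch of that retry-with-backoff shape, with lookupIP standing in for the real lease query (hypothetical helper, illustration only):

// retrysketch.go: illustrative retry-with-backoff loop, similar in shape to the retry.go waits above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for "ask libvirt for the domain's current IP"; it succeeds after a few tries.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("machine has no IP yet")
	}
	return "192.168.61.91", nil
}

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, as the logged waits suggest.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: %v, retrying in %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay *= 2
	}
	fmt.Println("gave up waiting for an IP")
}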
	I0420 01:26:00.184338  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.684685  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.345607  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:03.850887  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:03.850915  142057 pod_ready.go:81] duration metric: took 3.512592061s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:03.850929  142057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:05.857665  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:05.183082  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:07.682906  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:10.191165  141746 start.go:364] duration metric: took 1m1.9514957s to acquireMachinesLock for "no-preload-338118"
	I0420 01:26:10.191222  141746 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:26:10.191235  141746 fix.go:54] fixHost starting: 
	I0420 01:26:10.191624  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:26:10.191668  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:26:10.212169  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I0420 01:26:10.212568  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:26:10.213074  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:26:10.213120  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:26:10.213524  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:26:10.213755  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:10.213957  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:26:10.215578  141746 fix.go:112] recreateIfNeeded on no-preload-338118: state=Stopped err=<nil>
	I0420 01:26:10.215604  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	W0420 01:26:10.215788  141746 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:26:10.217632  141746 out.go:177] * Restarting existing kvm2 VM for "no-preload-338118" ...
	I0420 01:26:10.218915  141746 main.go:141] libmachine: (no-preload-338118) Calling .Start
	I0420 01:26:10.219094  141746 main.go:141] libmachine: (no-preload-338118) Ensuring networks are active...
	I0420 01:26:10.219820  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network default is active
	I0420 01:26:10.220181  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network mk-no-preload-338118 is active
	I0420 01:26:10.220584  141746 main.go:141] libmachine: (no-preload-338118) Getting domain xml...
	I0420 01:26:10.221275  141746 main.go:141] libmachine: (no-preload-338118) Creating domain...
	I0420 01:26:08.363522  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:09.858701  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:09.858731  142057 pod_ready.go:81] duration metric: took 6.007793209s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:09.858742  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:08.743367  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.743867  142411 main.go:141] libmachine: (old-k8s-version-564860) Found IP for machine: 192.168.61.91
	I0420 01:26:08.743896  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserving static IP address...
	I0420 01:26:08.743914  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has current primary IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.744294  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.744324  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserved static IP address: 192.168.61.91
	I0420 01:26:08.744344  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | skip adding static IP to network mk-old-k8s-version-564860 - found existing host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"}
	I0420 01:26:08.744368  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Getting to WaitForSSH function...
	I0420 01:26:08.744387  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting for SSH to be available...
	I0420 01:26:08.746714  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747119  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.747155  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747278  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH client type: external
	I0420 01:26:08.747314  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa (-rw-------)
	I0420 01:26:08.747346  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:08.747359  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | About to run SSH command:
	I0420 01:26:08.747373  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | exit 0
	I0420 01:26:08.877633  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:08.878016  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:26:08.878715  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:08.881556  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.881982  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.882028  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.882326  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:26:08.882586  142411 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:08.882613  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:08.882853  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:08.885133  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885479  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.885510  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885647  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:08.885843  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886029  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886192  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:08.886403  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:08.886642  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:08.886657  142411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:09.006625  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:09.006655  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.006914  142411 buildroot.go:166] provisioning hostname "old-k8s-version-564860"
	I0420 01:26:09.006940  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.007144  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.010016  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010349  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.010374  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010597  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.010841  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011040  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011235  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.011439  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.011682  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.011718  142411 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-564860 && echo "old-k8s-version-564860" | sudo tee /etc/hostname
	I0420 01:26:09.155581  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-564860
	
	I0420 01:26:09.155612  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.158583  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159021  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.159068  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159285  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.159519  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159747  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159933  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.160128  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.160362  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.160390  142411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-564860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-564860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-564860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:09.288804  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:09.288834  142411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:09.288856  142411 buildroot.go:174] setting up certificates
	I0420 01:26:09.288867  142411 provision.go:84] configureAuth start
	I0420 01:26:09.288877  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.289286  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:09.292454  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.292884  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.292923  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.293076  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.295234  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295537  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.295565  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295675  142411 provision.go:143] copyHostCerts
	I0420 01:26:09.295747  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:09.295758  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:09.295811  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:09.295936  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:09.295951  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:09.295981  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:09.296063  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:09.296075  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:09.296095  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:09.296154  142411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-564860 san=[127.0.0.1 192.168.61.91 localhost minikube old-k8s-version-564860]
	I0420 01:26:09.436313  142411 provision.go:177] copyRemoteCerts
	I0420 01:26:09.436373  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:09.436401  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.439316  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.439743  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439856  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.440057  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.440226  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.440360  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:09.529141  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:09.558376  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0420 01:26:09.586393  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:26:09.615274  142411 provision.go:87] duration metric: took 326.393984ms to configureAuth
	I0420 01:26:09.615300  142411 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:09.615501  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:26:09.615590  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.618470  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.618905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.618938  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.619141  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.619325  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619505  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619662  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.619862  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.620073  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.620091  142411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:09.924929  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:09.924958  142411 machine.go:97] duration metric: took 1.042352034s to provisionDockerMachine
	I0420 01:26:09.924973  142411 start.go:293] postStartSetup for "old-k8s-version-564860" (driver="kvm2")
	I0420 01:26:09.924985  142411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:09.925021  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:09.925441  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:09.925485  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.927985  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928377  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.928407  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928565  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.928770  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.928944  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.929114  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.020189  142411 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:10.025578  142411 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:10.025607  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:10.025707  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:10.025795  142411 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:10.025888  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:10.038138  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:10.065063  142411 start.go:296] duration metric: took 140.07164ms for postStartSetup
	I0420 01:26:10.065111  142411 fix.go:56] duration metric: took 24.94209431s for fixHost
	I0420 01:26:10.065139  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.068099  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068493  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.068544  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068697  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.068916  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069255  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.069455  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:10.069662  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:10.069678  142411 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:10.190955  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576370.174630368
	
	I0420 01:26:10.190984  142411 fix.go:216] guest clock: 1713576370.174630368
	I0420 01:26:10.190994  142411 fix.go:229] Guest: 2024-04-20 01:26:10.174630368 +0000 UTC Remote: 2024-04-20 01:26:10.065116719 +0000 UTC m=+276.709087933 (delta=109.513649ms)
	I0420 01:26:10.191036  142411 fix.go:200] guest clock delta is within tolerance: 109.513649ms
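The three fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it to the host clock, and accept the ~110ms delta as within tolerance. A minimal Go sketch of that comparison follows; the parsing helper, the tolerance value, and the sample output string are illustrative assumptions, not minikube's implementation.

```go
// Sketch: parse `date +%s.%N` output from the guest and compare to the host
// clock, as in the fix.go "guest clock delta is within tolerance" lines above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses "seconds.nanoseconds" as printed by `date +%s.%N`.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1713576370.174630368") // value taken from the log
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	tolerance := 2 * time.Second // assumed threshold, for illustration only
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be needed\n", delta)
	}
}
```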
	I0420 01:26:10.191044  142411 start.go:83] releasing machines lock for "old-k8s-version-564860", held for 25.068071712s
	I0420 01:26:10.191074  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.191368  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:10.194872  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195333  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.195365  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195510  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196060  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196253  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196331  142411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:10.196375  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.196439  142411 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:10.196467  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.199156  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199522  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.199572  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199760  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.199975  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200098  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.200137  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.200165  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.200326  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.200700  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.200857  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200992  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.201150  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.283430  142411 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:10.310703  142411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:10.462457  142411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:10.470897  142411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:10.470993  142411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:10.489867  142411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:10.489899  142411 start.go:494] detecting cgroup driver to use...
	I0420 01:26:10.489996  142411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:10.512741  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:10.530013  142411 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:10.530077  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:10.548567  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:10.565645  142411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:10.693390  142411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:10.878889  142411 docker.go:233] disabling docker service ...
	I0420 01:26:10.878973  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:10.901233  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:10.915219  142411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:11.053815  142411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:11.201766  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:11.218569  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:11.240543  142411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0420 01:26:11.240604  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.253384  142411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:11.253460  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.268703  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.281575  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.296477  142411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:11.312458  142411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:11.328008  142411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:11.328076  142411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:11.349027  142411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:11.362064  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:11.500624  142411 ssh_runner.go:195] Run: sudo systemctl restart crio
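Before the `systemctl restart crio` above, the log shows two `sed` rewrites of /etc/crio/crio.conf.d/02-crio.conf: pointing `pause_image` at registry.k8s.io/pause:3.2 and switching `cgroup_manager` to cgroupfs. The Go snippet below sketches the same line-level transformation in memory; it is an illustration of the edit, not minikube's code, and the sample input config is made up.

```go
// Sketch: rewrite pause_image and cgroup_manager lines in a CRI-O config,
// mirroring the sed commands shown in the log above.
package main

import (
	"fmt"
	"regexp"
)

func configureCrio(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	// Hypothetical starting config, only to demonstrate the rewrite.
	original := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(configureCrio(original))
}
```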
	I0420 01:26:11.665985  142411 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:11.666061  142411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:11.672929  142411 start.go:562] Will wait 60s for crictl version
	I0420 01:26:11.673006  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:11.678398  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:11.727572  142411 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:11.727663  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.760504  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.803463  142411 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0420 01:26:11.804782  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:11.807755  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808135  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:11.808177  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808396  142411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:11.813653  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:11.830618  142411 kubeadm.go:877] updating cluster {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:11.830793  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:26:11.830874  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:11.889149  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:11.889218  142411 ssh_runner.go:195] Run: which lz4
	I0420 01:26:11.894461  142411 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:26:11.900427  142411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:26:11.900456  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0420 01:26:10.183110  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:12.184209  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:11.636722  141746 main.go:141] libmachine: (no-preload-338118) Waiting to get IP...
	I0420 01:26:11.637635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.638048  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.638135  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.638011  143635 retry.go:31] will retry after 264.135122ms: waiting for machine to come up
	I0420 01:26:11.903486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.904008  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.904053  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.903958  143635 retry.go:31] will retry after 367.952741ms: waiting for machine to come up
	I0420 01:26:12.273951  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.274547  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.274584  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.274491  143635 retry.go:31] will retry after 390.958735ms: waiting for machine to come up
	I0420 01:26:12.667348  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.667888  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.667915  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.667820  143635 retry.go:31] will retry after 554.212994ms: waiting for machine to come up
	I0420 01:26:13.223423  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.224158  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.224184  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.224058  143635 retry.go:31] will retry after 686.102207ms: waiting for machine to come up
	I0420 01:26:13.911430  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.912019  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.912042  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.911968  143635 retry.go:31] will retry after 875.263983ms: waiting for machine to come up
	I0420 01:26:14.788949  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:14.789431  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:14.789481  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:14.789392  143635 retry.go:31] will retry after 847.129796ms: waiting for machine to come up
	I0420 01:26:15.637863  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:15.638348  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:15.638379  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:15.638288  143635 retry.go:31] will retry after 1.162423805s: waiting for machine to come up
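The retry.go lines above poll libvirt for the no-preload-338118 VM's DHCP lease, sleeping a little longer after each miss (264ms, 367ms, 390ms, ...). Below is a small Go sketch of that retry-with-growing-delay pattern under stated assumptions: the function names, the ~1.5x growth factor, and the jitter are illustrative, not minikube's exact backoff schedule.

```go
// Sketch: retry a check with a growing, jittered delay, as in the
// "will retry after ...: waiting for machine to come up" lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out,
// sleeping a slightly longer, jittered delay after each failure.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		} else {
			// Add up to 50% jitter so concurrent waiters don't poll in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow the delay ~1.5x each round
		}
	}
	return errors.New("machine never came up")
}

func main() {
	tries := 0
	err := retryWithBackoff(8, 250*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```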
	I0420 01:26:11.866297  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:13.868499  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:14.867208  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.867241  142057 pod_ready.go:81] duration metric: took 5.008488667s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.867254  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875100  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.875119  142057 pod_ready.go:81] duration metric: took 7.856647ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875131  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880630  142057 pod_ready.go:92] pod "kube-proxy-crzk6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.880651  142057 pod_ready.go:81] duration metric: took 5.512379ms for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880661  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885625  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.885645  142057 pod_ready.go:81] duration metric: took 4.976632ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885656  142057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
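The pod_ready.go lines above wait up to 4m0s for each control-plane pod to report the Ready condition. The client-go sketch below shows one way such a wait can be expressed; the kubeconfig wiring, poll interval, and pod name are assumptions taken from the log for illustration, and this is not the test suite's own helper.

```go
// Sketch: poll a pod until its Ready condition is True, up to a timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		return isPodReady(pod), nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	err = waitForPodReady(client, "kube-system", "kube-apiserver-embed-certs-269507", 4*time.Minute)
	fmt.Println("ready wait result:", err)
}
```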
	I0420 01:26:14.031960  142411 crio.go:462] duration metric: took 2.137532848s to copy over tarball
	I0420 01:26:14.032043  142411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:26:17.581625  142411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.549548059s)
	I0420 01:26:17.581660  142411 crio.go:469] duration metric: took 3.549666471s to extract the tarball
	I0420 01:26:17.581672  142411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:26:17.633172  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:17.679514  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:17.679544  142411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:17.679710  142411 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.679940  142411 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.680051  142411 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.680061  142411 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.680225  142411 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.680266  142411 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0420 01:26:17.680442  142411 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.680516  142411 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.682336  142411 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.682425  142411 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0420 01:26:17.682428  142411 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.682462  142411 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.682341  142411 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.682512  142411 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.682952  142411 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.682955  142411 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.846602  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.850673  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.866812  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.871983  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.876346  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0420 01:26:17.876745  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.881269  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.985788  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.997662  142411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0420 01:26:17.997709  142411 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0420 01:26:17.997716  142411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.997751  142411 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.997778  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:17.997797  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071610  142411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0420 01:26:18.071682  142411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.071705  142411 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0420 01:26:18.071741  142411 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.071760  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071793  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.085631  142411 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0420 01:26:18.085689  142411 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0420 01:26:18.085748  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.087239  142411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0420 01:26:18.087288  142411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.087362  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.094891  142411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0420 01:26:18.094940  142411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:18.094989  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.232524  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.232613  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0420 01:26:18.232649  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.232682  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.232710  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
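The cache_images lines above decide that each v1.20.0 image "needs transfer" because the runtime does not report the expected image ID for its tag, then remove the stale tag with `crictl rmi` before loading from the local cache. A rough Go sketch of that decision follows; the map stands in for `sudo podman image inspect --format {{.Id}} <image>`, the two IDs are copied from the log, and everything else is illustrative.

```go
// Sketch: an image needs transfer when the runtime's ID for its tag does not
// match the cached copy's ID (or the image is missing entirely).
package main

import "fmt"

func needsTransfer(runtimeIDs map[string]string, image, cachedID string) bool {
	return runtimeIDs[image] != cachedID
}

func main() {
	runtimeIDs := map[string]string{} // nothing preloaded, as in the log above
	cached := map[string]string{
		"registry.k8s.io/kube-apiserver:v1.20.0": "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99",
		"registry.k8s.io/pause:3.2":              "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	}
	for image, id := range cached {
		if needsTransfer(runtimeIDs, image, id) {
			fmt.Printf("%q needs transfer: rmi then load from cache\n", image)
		}
	}
}
```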
	I0420 01:26:14.684499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:17.185481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:16.802494  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:16.802977  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:16.803009  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:16.802908  143635 retry.go:31] will retry after 1.370900633s: waiting for machine to come up
	I0420 01:26:18.175474  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:18.175996  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:18.176022  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:18.175943  143635 retry.go:31] will retry after 1.698879408s: waiting for machine to come up
	I0420 01:26:19.876437  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:19.876901  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:19.876932  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:19.876843  143635 retry.go:31] will retry after 2.622833508s: waiting for machine to come up
	I0420 01:26:16.894119  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.894941  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.408724  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0420 01:26:18.408791  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0420 01:26:18.410041  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0420 01:26:18.410136  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0420 01:26:18.424042  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0420 01:26:18.428203  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0420 01:26:18.428295  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0420 01:26:18.450170  142411 cache_images.go:92] duration metric: took 770.600266ms to LoadCachedImages
	W0420 01:26:18.450288  142411 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0420 01:26:18.450305  142411 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.20.0 crio true true} ...
	I0420 01:26:18.450428  142411 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-564860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:18.450522  142411 ssh_runner.go:195] Run: crio config
	I0420 01:26:18.503362  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:26:18.503407  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:18.503427  142411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:18.503463  142411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-564860 NodeName:old-k8s-version-564860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}

	I0420 01:26:18.503671  142411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-564860"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:18.503745  142411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0420 01:26:18.516393  142411 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:18.516475  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:18.529038  142411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0420 01:26:18.550442  142411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:18.572012  142411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
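The kubeadm.yaml.new copied to the node above is the multi-document configuration dumped earlier in the log, produced by substituting cluster-specific values (advertise address, node name, pod CIDR, Kubernetes version) into a template. The Go snippet below is a minimal text/template sketch of that idea under stated assumptions: the template text and field names are illustrative, not minikube's actual template.

```go
// Sketch: render a kubeadm config fragment from cluster-specific values.
package main

import (
	"os"
	"text/template"
)

type clusterValues struct {
	AdvertiseAddress  string
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, clusterValues{
		AdvertiseAddress:  "192.168.61.91",
		NodeName:          "old-k8s-version-564860",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.20.0",
	})
}
```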
	I0420 01:26:18.595682  142411 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:18.602036  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
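The bash one-liner above makes the /etc/hosts update idempotent: it drops any existing line for control-plane.minikube.internal, appends the desired "IP<TAB>hostname" mapping, and copies the result back into place. The Go helper below sketches the same transformation in memory; the function name and sample contents are illustrative assumptions.

```go
// Sketch: remove any stale mapping for a hostname, then append the new one,
// mirroring the grep -v / echo / cp one-liner in the log above.
package main

import (
	"fmt"
	"strings"
)

func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue // drop any stale mapping for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	current := "127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(current, "192.168.61.91", "control-plane.minikube.internal"))
}
```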
	I0420 01:26:18.622226  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:18.774466  142411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:18.795074  142411 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860 for IP: 192.168.61.91
	I0420 01:26:18.795104  142411 certs.go:194] generating shared ca certs ...
	I0420 01:26:18.795125  142411 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:18.795301  142411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:18.795342  142411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:18.795352  142411 certs.go:256] generating profile certs ...
	I0420 01:26:18.795433  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key
	I0420 01:26:18.795487  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f
	I0420 01:26:18.795524  142411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key
	I0420 01:26:18.795645  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:18.795675  142411 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:18.795685  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:18.795706  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:18.795735  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:18.795765  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:18.795828  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:18.796607  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:18.845581  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:18.891065  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:18.933536  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:18.977381  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:26:19.009816  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:19.042053  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:19.090614  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:26:19.119554  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:19.147545  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:19.177775  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:19.211008  142411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:19.234399  142411 ssh_runner.go:195] Run: openssl version
	I0420 01:26:19.242808  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:19.256132  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261681  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261739  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.270546  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:19.284112  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:19.296998  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302497  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302551  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.310883  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:19.325130  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:19.338964  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344915  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344986  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.351926  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:19.366428  142411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:19.372391  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:19.379606  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:19.386698  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:19.395102  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:19.401981  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:19.409477  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 01:26:19.416444  142411 kubeadm.go:391] StartCluster: {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:19.416557  142411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:19.416600  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.460782  142411 cri.go:89] found id: ""
	I0420 01:26:19.460884  142411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:19.473812  142411 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:19.473832  142411 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:19.473838  142411 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:19.473899  142411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:19.486686  142411 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:19.487757  142411 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:26:19.488411  142411 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-76456/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-564860" cluster setting kubeconfig missing "old-k8s-version-564860" context setting]
	I0420 01:26:19.489438  142411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:19.491237  142411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:19.503483  142411 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0420 01:26:19.503519  142411 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:19.503530  142411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:19.503597  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.546350  142411 cri.go:89] found id: ""
	I0420 01:26:19.546438  142411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:19.568177  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:19.580545  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:19.580573  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:19.580658  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:19.592945  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:19.593010  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:19.605598  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:19.617261  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:19.617346  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:19.629242  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.640143  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:19.640211  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.654226  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:19.666207  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:19.666275  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:19.678899  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:19.694374  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:19.845435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.619142  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.891265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.020834  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.124545  142411 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:21.124652  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:21.625462  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.125171  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:23.125077  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:19.685129  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.183561  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.502227  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:22.502665  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:22.502696  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:22.502603  143635 retry.go:31] will retry after 3.3877716s: waiting for machine to come up
	I0420 01:26:21.392042  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.392579  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.394230  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.625392  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.125446  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.625035  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.624718  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.625420  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.125162  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.625475  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:28.125637  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.685014  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:27.182545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.891769  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:25.892321  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:25.892353  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:25.892252  143635 retry.go:31] will retry after 3.395760477s: waiting for machine to come up
	I0420 01:26:29.290361  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:29.290858  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:29.290907  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:29.290791  143635 retry.go:31] will retry after 4.86761736s: waiting for machine to come up
	I0420 01:26:27.892903  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:30.392680  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:28.625781  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.125145  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.625647  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.125081  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.625404  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.124753  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.124750  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.624841  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:33.125120  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.682707  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:31.682790  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:33.683549  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.162306  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.162883  141746 main.go:141] libmachine: (no-preload-338118) Found IP for machine: 192.168.72.89
	I0420 01:26:34.162912  141746 main.go:141] libmachine: (no-preload-338118) Reserving static IP address...
	I0420 01:26:34.162928  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has current primary IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.163266  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.163296  141746 main.go:141] libmachine: (no-preload-338118) Reserved static IP address: 192.168.72.89
	I0420 01:26:34.163316  141746 main.go:141] libmachine: (no-preload-338118) DBG | skip adding static IP to network mk-no-preload-338118 - found existing host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"}
	I0420 01:26:34.163335  141746 main.go:141] libmachine: (no-preload-338118) DBG | Getting to WaitForSSH function...
	I0420 01:26:34.163350  141746 main.go:141] libmachine: (no-preload-338118) Waiting for SSH to be available...
	I0420 01:26:34.165641  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.165947  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.165967  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.166136  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH client type: external
	I0420 01:26:34.166161  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa (-rw-------)
	I0420 01:26:34.166190  141746 main.go:141] libmachine: (no-preload-338118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:34.166216  141746 main.go:141] libmachine: (no-preload-338118) DBG | About to run SSH command:
	I0420 01:26:34.166232  141746 main.go:141] libmachine: (no-preload-338118) DBG | exit 0
	I0420 01:26:34.293435  141746 main.go:141] libmachine: (no-preload-338118) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:34.293789  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetConfigRaw
	I0420 01:26:34.294381  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.296958  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297355  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.297391  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297670  141746 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/config.json ...
	I0420 01:26:34.297915  141746 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:34.297945  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:34.298191  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.300645  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.301068  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301280  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.301496  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301719  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301895  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.302104  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.302272  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.302284  141746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:34.419082  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:34.419113  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419424  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:26:34.419452  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419715  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.422630  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423010  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.423052  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423212  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.423415  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423599  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423716  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.423928  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.424135  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.424149  141746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-338118 && echo "no-preload-338118" | sudo tee /etc/hostname
	I0420 01:26:34.555223  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-338118
	
	I0420 01:26:34.555254  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.558217  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558606  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.558643  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558792  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.558999  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559241  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559423  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.559655  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.559827  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.559844  141746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-338118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-338118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-338118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:34.684192  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:34.684226  141746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:34.684261  141746 buildroot.go:174] setting up certificates
	I0420 01:26:34.684270  141746 provision.go:84] configureAuth start
	I0420 01:26:34.684289  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.684581  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.687363  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687703  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.687733  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687876  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.690220  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690542  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.690569  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690739  141746 provision.go:143] copyHostCerts
	I0420 01:26:34.690806  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:34.690817  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:34.690869  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:34.691006  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:34.691017  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:34.691038  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:34.691103  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:34.691111  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:34.691130  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:34.691178  141746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.no-preload-338118 san=[127.0.0.1 192.168.72.89 localhost minikube no-preload-338118]
	I0420 01:26:34.899595  141746 provision.go:177] copyRemoteCerts
	I0420 01:26:34.899652  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:34.899676  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.902298  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902745  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.902777  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902956  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.903150  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.903309  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.903457  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:34.993263  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:35.024837  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0420 01:26:35.054254  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:26:35.082455  141746 provision.go:87] duration metric: took 398.171071ms to configureAuth
	I0420 01:26:35.082488  141746 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:35.082741  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:26:35.082822  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.085868  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086264  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.086313  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086481  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.086708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.086868  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.087051  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.087254  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.087424  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.087440  141746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:35.374277  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:35.374305  141746 machine.go:97] duration metric: took 1.076369907s to provisionDockerMachine
	I0420 01:26:35.374327  141746 start.go:293] postStartSetup for "no-preload-338118" (driver="kvm2")
	I0420 01:26:35.374342  141746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:35.374366  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.374733  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:35.374787  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.378647  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.378998  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.379038  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.379149  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.379353  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.379518  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.379694  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.468711  141746 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:35.473783  141746 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:35.473808  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:35.473929  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:35.474088  141746 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:35.474217  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:35.484161  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:35.511695  141746 start.go:296] duration metric: took 137.354669ms for postStartSetup
	I0420 01:26:35.511751  141746 fix.go:56] duration metric: took 25.320502022s for fixHost
	I0420 01:26:35.511780  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.514635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.515067  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515247  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.515448  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515663  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515814  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.515988  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.516218  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.516240  141746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:35.632029  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576395.615634246
	
	I0420 01:26:35.632057  141746 fix.go:216] guest clock: 1713576395.615634246
	I0420 01:26:35.632067  141746 fix.go:229] Guest: 2024-04-20 01:26:35.615634246 +0000 UTC Remote: 2024-04-20 01:26:35.511757232 +0000 UTC m=+369.861721674 (delta=103.877014ms)
	I0420 01:26:35.632113  141746 fix.go:200] guest clock delta is within tolerance: 103.877014ms
	I0420 01:26:35.632137  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 25.440933699s
	I0420 01:26:35.632168  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.632486  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:35.635888  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636400  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.636440  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636751  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637250  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637448  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637547  141746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:35.637597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.637694  141746 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:35.637720  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.640562  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640800  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640953  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.640969  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641244  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641389  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.641433  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641644  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.641670  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641806  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.641873  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641997  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.642163  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:32.892859  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.893134  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:35.749528  141746 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:35.756960  141746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:35.912075  141746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:35.920264  141746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:35.920355  141746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:35.937729  141746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:35.937753  141746 start.go:494] detecting cgroup driver to use...
	I0420 01:26:35.937811  141746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:35.954425  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:35.970967  141746 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:35.971023  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:35.986186  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:36.000803  141746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:36.114673  141746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:36.273386  141746 docker.go:233] disabling docker service ...
	I0420 01:26:36.273472  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:36.290471  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:36.305722  141746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:36.459528  141746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:36.609105  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:36.627255  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:36.651459  141746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:26:36.651535  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.663171  141746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:36.663255  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.674706  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.686196  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.697909  141746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:36.709625  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.720746  141746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.740333  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.752898  141746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:36.764600  141746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:36.764653  141746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:36.780697  141746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:36.791440  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:36.936761  141746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:37.095374  141746 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:37.095475  141746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:37.101601  141746 start.go:562] Will wait 60s for crictl version
	I0420 01:26:37.101673  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.106191  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:37.152257  141746 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:37.152361  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.187172  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.225203  141746 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:26:33.625596  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.124972  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.624791  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.125630  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.624815  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.125677  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.625631  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.624883  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:38.124924  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.183893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.184381  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:37.226708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:37.229679  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230090  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:37.230131  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230253  141746 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:37.234914  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:37.249029  141746 kubeadm.go:877] updating cluster {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:37.249155  141746 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:26:37.249208  141746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:37.287235  141746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:26:37.287270  141746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:37.287341  141746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.287379  141746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.287387  141746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.287363  141746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.287414  141746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.287378  141746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.287399  141746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.287365  141746 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288833  141746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.288849  141746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.288863  141746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.288922  141746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.288933  141746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.288831  141746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.288957  141746 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288985  141746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.452705  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.462178  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.463495  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0420 01:26:37.469562  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.480726  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.501069  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.517291  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.533934  141746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0420 01:26:37.533976  141746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.534032  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.578341  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.602332  141746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0420 01:26:37.602381  141746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.602432  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.718979  141746 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0420 01:26:37.719028  141746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0420 01:26:37.719065  141746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0420 01:26:37.719093  141746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.719100  141746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0420 01:26:37.719126  141746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.719153  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719220  141746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0420 01:26:37.719256  141746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.719067  141746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.719155  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719306  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.719309  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719036  141746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.719369  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719154  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.719297  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.733974  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.802462  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.802496  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.802544  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0420 01:26:37.802575  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0420 01:26:37.802637  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.802708  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.802725  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0420 01:26:37.802788  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:37.897150  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0420 01:26:37.897190  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0420 01:26:37.897259  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:37.897268  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0420 01:26:37.897278  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:37.897285  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.897295  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0420 01:26:37.897337  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.902046  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0420 01:26:37.902094  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0420 01:26:37.902151  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:37.902307  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0420 01:26:37.902399  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:37.914016  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0420 01:26:40.184815  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.287511777s)
	I0420 01:26:40.184859  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0420 01:26:40.184918  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.282742718s)
	I0420 01:26:40.184951  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.282534359s)
	I0420 01:26:40.184974  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0420 01:26:40.184981  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0420 01:26:40.185052  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.287690505s)
	I0420 01:26:40.185081  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0420 01:26:40.185113  141746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:40.185175  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.392757  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:39.394094  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.624766  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.125330  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.624953  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.125409  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.625125  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.125460  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.625041  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.125103  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:43.125237  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.186531  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.683524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.252666  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067465398s)
	I0420 01:26:42.252710  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0420 01:26:42.252735  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:42.252774  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:44.616564  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.363755421s)
	I0420 01:26:44.616614  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0420 01:26:44.616649  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:44.616713  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:41.394300  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.895493  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.625155  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.124986  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.125834  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.625359  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.125706  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.625115  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.125204  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.625746  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:48.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.183628  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:47.684002  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:46.894590  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.277850916s)
	I0420 01:26:46.894626  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0420 01:26:46.894655  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:46.894712  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:49.158327  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.263583483s)
	I0420 01:26:49.158370  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0420 01:26:49.158406  141746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:49.158478  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:50.223297  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.06478687s)
	I0420 01:26:50.223344  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0420 01:26:50.223382  141746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:50.223452  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:46.393020  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.394414  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:50.893840  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.125441  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.625078  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.124787  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.624817  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.125211  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.625408  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.124903  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.624826  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:53.124728  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.183173  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:52.183563  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:54.187354  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.963876859s)
	I0420 01:26:54.187388  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0420 01:26:54.187416  141746 cache_images.go:123] Successfully loaded all cached images
	I0420 01:26:54.187426  141746 cache_images.go:92] duration metric: took 16.900140079s to LoadCachedImages
	I0420 01:26:54.187439  141746 kubeadm.go:928] updating node { 192.168.72.89 8443 v1.30.0 crio true true} ...
	I0420 01:26:54.187545  141746 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-338118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:54.187608  141746 ssh_runner.go:195] Run: crio config
	I0420 01:26:54.245888  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:26:54.245914  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:54.245928  141746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:54.245954  141746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.89 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-338118 NodeName:no-preload-338118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:26:54.246153  141746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-338118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:54.246232  141746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:26:54.259262  141746 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:54.259360  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:54.270769  141746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0420 01:26:54.290436  141746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:54.311846  141746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0420 01:26:54.332517  141746 ssh_runner.go:195] Run: grep 192.168.72.89	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:54.336874  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:54.350084  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:54.466328  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:54.484511  141746 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118 for IP: 192.168.72.89
	I0420 01:26:54.484545  141746 certs.go:194] generating shared ca certs ...
	I0420 01:26:54.484609  141746 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:54.484846  141746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:54.484960  141746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:54.484996  141746 certs.go:256] generating profile certs ...
	I0420 01:26:54.485165  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/client.key
	I0420 01:26:54.485273  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key.f8d917a4
	I0420 01:26:54.485353  141746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key
	I0420 01:26:54.485543  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:54.485604  141746 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:54.485622  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:54.485667  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:54.485707  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:54.485741  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:54.485804  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:54.486486  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:54.539867  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:54.575443  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:54.609857  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:54.638338  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 01:26:54.672043  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:54.704197  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:54.733771  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:26:54.761911  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:54.789278  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:54.816890  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:54.845884  141746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:54.864508  141746 ssh_runner.go:195] Run: openssl version
	I0420 01:26:54.870717  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:54.883192  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888532  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888588  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.895258  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:54.907346  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:54.919360  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924700  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924773  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.931133  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:54.942845  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:54.954785  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959769  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959856  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.966061  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:54.978389  141746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:54.983591  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:54.990157  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:54.996977  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:55.004103  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:55.010928  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:55.018024  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 01:26:55.024639  141746 kubeadm.go:391] StartCluster: {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:55.024733  141746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:55.024784  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.073888  141746 cri.go:89] found id: ""
	I0420 01:26:55.073954  141746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:55.087179  141746 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:55.087199  141746 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:55.087208  141746 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:55.087255  141746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:55.098975  141746 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:55.100487  141746 kubeconfig.go:125] found "no-preload-338118" server: "https://192.168.72.89:8443"
	I0420 01:26:55.103557  141746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:55.114871  141746 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.89
	I0420 01:26:55.114900  141746 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:55.114914  141746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:55.114983  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.174863  141746 cri.go:89] found id: ""
	I0420 01:26:55.174969  141746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:55.192867  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:55.203842  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:55.203866  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:55.203919  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:55.214476  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:55.214534  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:55.224728  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:55.235353  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:55.235403  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:55.245905  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.256614  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:55.256678  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.266909  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:55.276249  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:55.276294  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:55.285758  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:55.295896  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:55.418331  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:53.394623  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:55.893492  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:53.625614  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.125487  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.625414  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.125150  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.624831  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.125438  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.625450  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.125591  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.625757  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:58.124963  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.186686  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.681991  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.682958  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.156484  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.376987  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.450655  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.517915  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:56.518018  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.018277  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.518215  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.538017  141746 api_server.go:72] duration metric: took 1.020104679s to wait for apiserver process to appear ...
	I0420 01:26:57.538045  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:26:57.538070  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:26:58.392944  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:00.892688  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.625549  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.125177  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.624704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.125709  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.625346  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.124849  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.624947  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.125407  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.625704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:03.125695  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.182564  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.183451  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:02.538442  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:02.538498  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:03.396891  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:05.896375  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.625423  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.124806  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.625232  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.124917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.624983  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.124851  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.625029  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.125554  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.625163  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:08.125455  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.682216  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.683636  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.538926  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:07.538973  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:08.392765  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:10.392933  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:08.625100  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.125395  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.625454  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.125615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.624892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.125366  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.625074  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.125165  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.625629  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:13.124824  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.182884  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.683893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.540046  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:12.540121  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:12.393561  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:14.893756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:13.625040  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.125511  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.624890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.125622  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.625393  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.125215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.625561  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.125263  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.624772  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:18.125597  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.183734  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.683742  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.540652  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:17.540701  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.076616  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": read tcp 192.168.72.1:34174->192.168.72.89:8443: read: connection reset by peer
	I0420 01:27:18.076671  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.077186  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:18.538798  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.539454  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:19.039080  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:17.393196  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:19.395273  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:18.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.124956  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.625579  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.124827  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.625212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:21.125476  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:21.125553  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:21.174633  142411 cri.go:89] found id: ""
	I0420 01:27:21.174668  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.174679  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:21.174686  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:21.174767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:21.218230  142411 cri.go:89] found id: ""
	I0420 01:27:21.218263  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.218275  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:21.218284  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:21.218369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:21.258886  142411 cri.go:89] found id: ""
	I0420 01:27:21.258916  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.258926  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:21.258932  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:21.259003  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:21.306725  142411 cri.go:89] found id: ""
	I0420 01:27:21.306758  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.306769  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:21.306777  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:21.306843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:21.349049  142411 cri.go:89] found id: ""
	I0420 01:27:21.349086  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.349098  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:21.349106  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:21.349174  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:21.392312  142411 cri.go:89] found id: ""
	I0420 01:27:21.392338  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.392346  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:21.392352  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:21.392425  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:21.434121  142411 cri.go:89] found id: ""
	I0420 01:27:21.434148  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.434156  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:21.434162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:21.434210  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:21.473728  142411 cri.go:89] found id: ""
	I0420 01:27:21.473754  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.473762  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:21.473772  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:21.473785  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:21.537607  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:21.537648  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:21.554563  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:21.554604  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:21.674778  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:21.674803  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:21.674829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:21.740625  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:21.740666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:20.182461  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:22.682574  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.039641  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:24.039690  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:21.397381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:23.893642  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.284890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:24.301486  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:24.301571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:24.340987  142411 cri.go:89] found id: ""
	I0420 01:27:24.341012  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.341021  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:24.341026  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:24.341102  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:24.379983  142411 cri.go:89] found id: ""
	I0420 01:27:24.380014  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.380024  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:24.380029  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:24.380113  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:24.438700  142411 cri.go:89] found id: ""
	I0420 01:27:24.438729  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.438739  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:24.438745  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:24.438795  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:24.487761  142411 cri.go:89] found id: ""
	I0420 01:27:24.487793  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.487802  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:24.487808  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:24.487870  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:24.529408  142411 cri.go:89] found id: ""
	I0420 01:27:24.529439  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.529448  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:24.529453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:24.529523  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:24.572782  142411 cri.go:89] found id: ""
	I0420 01:27:24.572817  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.572831  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:24.572841  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:24.572910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:24.620651  142411 cri.go:89] found id: ""
	I0420 01:27:24.620684  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.620696  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:24.620704  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:24.620769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:24.659481  142411 cri.go:89] found id: ""
	I0420 01:27:24.659513  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.659525  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:24.659537  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:24.659552  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.714483  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:24.714517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:24.730279  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:24.730316  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:24.804883  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:24.804909  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:24.804926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:24.879557  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:24.879602  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:27.431026  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:27.448112  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:27.448176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:27.494959  142411 cri.go:89] found id: ""
	I0420 01:27:27.494988  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.494999  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:27.495007  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:27.495075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:27.532023  142411 cri.go:89] found id: ""
	I0420 01:27:27.532055  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.532066  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:27.532075  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:27.532151  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:27.578551  142411 cri.go:89] found id: ""
	I0420 01:27:27.578600  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.578613  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:27.578621  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:27.578692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:27.618248  142411 cri.go:89] found id: ""
	I0420 01:27:27.618277  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.618288  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:27.618296  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:27.618363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:27.655682  142411 cri.go:89] found id: ""
	I0420 01:27:27.655714  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.655723  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:27.655729  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:27.655787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:27.696355  142411 cri.go:89] found id: ""
	I0420 01:27:27.696389  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.696400  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:27.696408  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:27.696478  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:27.735354  142411 cri.go:89] found id: ""
	I0420 01:27:27.735378  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.735396  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:27.735402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:27.735460  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:27.775234  142411 cri.go:89] found id: ""
	I0420 01:27:27.775261  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.775269  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:27.775277  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:27.775294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:27.789970  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:27.790005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:27.873345  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:27.873371  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:27.873387  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:27.952309  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:27.952353  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:28.003746  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:28.003792  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.683122  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:27.182311  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:29.040691  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:29.040743  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:26.394161  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:28.893349  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.893785  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.555691  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:30.570962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:30.571041  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:30.613185  142411 cri.go:89] found id: ""
	I0420 01:27:30.613218  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.613227  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:30.613233  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:30.613291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:30.654494  142411 cri.go:89] found id: ""
	I0420 01:27:30.654520  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.654529  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:30.654535  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:30.654600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:30.702605  142411 cri.go:89] found id: ""
	I0420 01:27:30.702634  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.702646  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:30.702653  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:30.702719  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:30.742072  142411 cri.go:89] found id: ""
	I0420 01:27:30.742104  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.742115  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:30.742123  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:30.742191  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:30.793199  142411 cri.go:89] found id: ""
	I0420 01:27:30.793232  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.793244  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:30.793252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:30.793340  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:30.832978  142411 cri.go:89] found id: ""
	I0420 01:27:30.833019  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.833034  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:30.833044  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:30.833126  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:30.875606  142411 cri.go:89] found id: ""
	I0420 01:27:30.875641  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.875655  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:30.875662  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:30.875729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:30.917288  142411 cri.go:89] found id: ""
	I0420 01:27:30.917335  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.917348  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:30.917360  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:30.917375  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:30.996446  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:30.996469  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:30.996485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:31.080494  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:31.080543  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:31.141226  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:31.141260  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:31.212808  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:31.212845  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:29.182651  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:31.183179  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.682476  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:34.041737  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:34.041789  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:33.393756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:35.395120  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.728927  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:33.745749  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:33.745835  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:33.788813  142411 cri.go:89] found id: ""
	I0420 01:27:33.788845  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.788859  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:33.788868  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:33.788936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:33.834918  142411 cri.go:89] found id: ""
	I0420 01:27:33.834948  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.834957  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:33.834963  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:33.835026  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:33.873928  142411 cri.go:89] found id: ""
	I0420 01:27:33.873960  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.873972  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:33.873977  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:33.874027  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:33.921462  142411 cri.go:89] found id: ""
	I0420 01:27:33.921497  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.921510  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:33.921519  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:33.921606  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:33.962280  142411 cri.go:89] found id: ""
	I0420 01:27:33.962308  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.962320  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:33.962329  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:33.962390  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:34.002582  142411 cri.go:89] found id: ""
	I0420 01:27:34.002616  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.002627  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:34.002635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:34.002707  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:34.047383  142411 cri.go:89] found id: ""
	I0420 01:27:34.047410  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.047421  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:34.047428  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:34.047489  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:34.088296  142411 cri.go:89] found id: ""
	I0420 01:27:34.088341  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.088352  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:34.088364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:34.088381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:34.180338  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:34.180380  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:34.224386  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:34.224422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:34.278451  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:34.278488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:34.294377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:34.294409  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:34.377115  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:36.878000  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:36.896875  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:36.896953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:36.953915  142411 cri.go:89] found id: ""
	I0420 01:27:36.953954  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.953968  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:36.953977  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:36.954056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:36.998223  142411 cri.go:89] found id: ""
	I0420 01:27:36.998250  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.998260  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:36.998268  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:36.998337  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:37.069299  142411 cri.go:89] found id: ""
	I0420 01:27:37.069346  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.069358  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:37.069366  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:37.069436  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:37.112068  142411 cri.go:89] found id: ""
	I0420 01:27:37.112100  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.112112  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:37.112119  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:37.112175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:37.155883  142411 cri.go:89] found id: ""
	I0420 01:27:37.155913  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.155924  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:37.155933  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:37.156006  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:37.200979  142411 cri.go:89] found id: ""
	I0420 01:27:37.201007  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.201018  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:37.201026  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:37.201091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:37.241639  142411 cri.go:89] found id: ""
	I0420 01:27:37.241667  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.241678  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:37.241686  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:37.241748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:37.281845  142411 cri.go:89] found id: ""
	I0420 01:27:37.281883  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.281894  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:37.281907  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:37.281923  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:37.327428  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:37.327463  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:37.385213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:37.385248  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:37.400158  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:37.400190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:37.476662  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:37.476687  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:37.476700  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:37.090819  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:27:37.090858  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:27:37.090877  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.124020  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.124076  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:37.538389  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.550894  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.550930  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.038486  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.051983  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:38.052019  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.538297  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.544961  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:27:38.553038  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:27:38.553065  141746 api_server.go:131] duration metric: took 41.015012791s to wait for apiserver health ...
	I0420 01:27:38.553075  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:27:38.553081  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:27:38.554687  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:27:35.684396  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.183391  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.555934  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:27:38.575384  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:27:38.609934  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:27:38.637152  141746 system_pods.go:59] 8 kube-system pods found
	I0420 01:27:38.637184  141746 system_pods.go:61] "coredns-7db6d8ff4d-r2hs7" [981840a2-82cd-49e0-8d4f-fbaf05290668] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:27:38.637191  141746 system_pods.go:61] "etcd-no-preload-338118" [92fc0da4-63d3-4f34-a5a6-27b73e7e210d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:27:38.637198  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [9f7bd5df-f733-4944-9ad2-0c9f0ea4529b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:27:38.637206  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [d7a0bd6a-2cd0-4b27-ae83-ae38c1a20c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:27:38.637215  141746 system_pods.go:61] "kube-proxy-zgq86" [d379ae65-c579-47e4-b055-6512e74868a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0420 01:27:38.637219  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [99558213-289d-4682-ba8e-20175c815563] Running
	I0420 01:27:38.637225  141746 system_pods.go:61] "metrics-server-569cc877fc-lcbcz" [1d2b716a-555a-46aa-ae27-c40553c94288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:27:38.637229  141746 system_pods.go:61] "storage-provisioner" [a8316010-8689-42aa-9741-227bf55a16bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:27:38.637236  141746 system_pods.go:74] duration metric: took 27.280844ms to wait for pod list to return data ...
	I0420 01:27:38.637243  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:27:38.640744  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:27:38.640774  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:27:38.640791  141746 node_conditions.go:105] duration metric: took 3.542872ms to run NodePressure ...
	I0420 01:27:38.640813  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:27:38.979785  141746 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987541  141746 kubeadm.go:733] kubelet initialised
	I0420 01:27:38.987570  141746 kubeadm.go:734] duration metric: took 7.752383ms waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987582  141746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:27:38.994929  141746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:38.999872  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999903  141746 pod_ready.go:81] duration metric: took 4.940439ms for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:38.999915  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999923  141746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.004575  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004595  141746 pod_ready.go:81] duration metric: took 4.662163ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.004603  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004608  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.012365  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012386  141746 pod_ready.go:81] duration metric: took 7.773001ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.012393  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012400  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.019091  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019125  141746 pod_ready.go:81] duration metric: took 6.70398ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.019137  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019146  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:37.894228  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:39.899004  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:40.075888  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:40.091313  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:40.091389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:40.134013  142411 cri.go:89] found id: ""
	I0420 01:27:40.134039  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.134048  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:40.134053  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:40.134136  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:40.182108  142411 cri.go:89] found id: ""
	I0420 01:27:40.182140  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.182151  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:40.182158  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:40.182222  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:40.225406  142411 cri.go:89] found id: ""
	I0420 01:27:40.225438  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.225447  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:40.225453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:40.225539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.267599  142411 cri.go:89] found id: ""
	I0420 01:27:40.267627  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.267636  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:40.267645  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:40.267790  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:40.309385  142411 cri.go:89] found id: ""
	I0420 01:27:40.309418  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.309439  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:40.309448  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:40.309525  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:40.351947  142411 cri.go:89] found id: ""
	I0420 01:27:40.351980  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.351993  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:40.352003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:40.352079  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:40.395583  142411 cri.go:89] found id: ""
	I0420 01:27:40.395614  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.395623  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:40.395629  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:40.395692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:40.441348  142411 cri.go:89] found id: ""
	I0420 01:27:40.441397  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.441412  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:40.441426  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:40.441445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:40.498231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:40.498268  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:40.514550  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:40.514578  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:40.593580  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:40.593614  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:40.593631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:40.671736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:40.671778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:43.224892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:43.240876  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:43.240939  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:43.281583  142411 cri.go:89] found id: ""
	I0420 01:27:43.281621  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.281634  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:43.281643  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:43.281705  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:43.321079  142411 cri.go:89] found id: ""
	I0420 01:27:43.321115  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.321125  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:43.321132  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:43.321277  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:43.365827  142411 cri.go:89] found id: ""
	I0420 01:27:43.365855  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.365864  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:43.365870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:43.365921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.184872  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.683826  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:41.025729  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.025868  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:45.526436  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.393681  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:44.401124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.404317  142411 cri.go:89] found id: ""
	I0420 01:27:43.404349  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.404361  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:43.404370  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:43.404443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:43.449268  142411 cri.go:89] found id: ""
	I0420 01:27:43.449299  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.449323  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:43.449331  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:43.449408  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:43.487782  142411 cri.go:89] found id: ""
	I0420 01:27:43.487829  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.487837  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:43.487844  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:43.487909  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:43.526650  142411 cri.go:89] found id: ""
	I0420 01:27:43.526677  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.526688  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:43.526695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:43.526755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:43.565288  142411 cri.go:89] found id: ""
	I0420 01:27:43.565328  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.565340  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:43.565352  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:43.565368  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:43.618013  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:43.618046  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:43.634064  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:43.634101  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:43.710633  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:43.710663  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:43.710679  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:43.796658  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:43.796709  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.352329  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:46.366848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:46.366935  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:46.413643  142411 cri.go:89] found id: ""
	I0420 01:27:46.413676  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.413687  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:46.413695  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:46.413762  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:46.457976  142411 cri.go:89] found id: ""
	I0420 01:27:46.458002  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.458011  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:46.458020  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:46.458086  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:46.500291  142411 cri.go:89] found id: ""
	I0420 01:27:46.500317  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.500328  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:46.500334  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:46.500398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:46.541279  142411 cri.go:89] found id: ""
	I0420 01:27:46.541331  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.541343  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:46.541359  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:46.541442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:46.585613  142411 cri.go:89] found id: ""
	I0420 01:27:46.585642  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.585654  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:46.585661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:46.585726  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:46.634400  142411 cri.go:89] found id: ""
	I0420 01:27:46.634430  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.634441  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:46.634450  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:46.634534  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:46.676276  142411 cri.go:89] found id: ""
	I0420 01:27:46.676305  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.676313  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:46.676320  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:46.676380  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:46.719323  142411 cri.go:89] found id: ""
	I0420 01:27:46.719356  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.719369  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:46.719381  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:46.719398  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:46.799735  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:46.799765  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:46.799790  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:46.878323  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:46.878371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.931870  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:46.931902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:46.983217  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:46.983250  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:45.182485  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.183499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.526708  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:50.034262  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:46.897249  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.393599  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.500147  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:49.517380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:49.517461  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:49.561300  142411 cri.go:89] found id: ""
	I0420 01:27:49.561347  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.561358  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:49.561365  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:49.561432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:49.604569  142411 cri.go:89] found id: ""
	I0420 01:27:49.604594  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.604608  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:49.604614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:49.604664  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:49.644952  142411 cri.go:89] found id: ""
	I0420 01:27:49.644983  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.644999  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:49.645006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:49.645071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:49.694719  142411 cri.go:89] found id: ""
	I0420 01:27:49.694749  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.694757  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:49.694764  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:49.694815  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:49.743821  142411 cri.go:89] found id: ""
	I0420 01:27:49.743849  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.743857  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:49.743865  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:49.743936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:49.789125  142411 cri.go:89] found id: ""
	I0420 01:27:49.789152  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.789161  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:49.789167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:49.789233  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:49.828794  142411 cri.go:89] found id: ""
	I0420 01:27:49.828829  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.828841  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:49.828848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:49.828913  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:49.873335  142411 cri.go:89] found id: ""
	I0420 01:27:49.873366  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.873375  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:49.873385  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:49.873397  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.930590  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:49.930632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:49.946850  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:49.946889  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:50.039200  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:50.039220  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:50.039236  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:50.122067  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:50.122118  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:52.664342  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:52.682978  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:52.683061  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:52.733806  142411 cri.go:89] found id: ""
	I0420 01:27:52.733836  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.733848  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:52.733855  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:52.733921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:52.785977  142411 cri.go:89] found id: ""
	I0420 01:27:52.786008  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.786020  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:52.786027  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:52.786092  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:52.826957  142411 cri.go:89] found id: ""
	I0420 01:27:52.826987  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.826995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:52.827001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:52.827056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:52.876208  142411 cri.go:89] found id: ""
	I0420 01:27:52.876251  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.876265  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:52.876276  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:52.876354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:52.918629  142411 cri.go:89] found id: ""
	I0420 01:27:52.918666  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.918679  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:52.918687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:52.918767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:52.967604  142411 cri.go:89] found id: ""
	I0420 01:27:52.967646  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.967655  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:52.967661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:52.967729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:53.010948  142411 cri.go:89] found id: ""
	I0420 01:27:53.010975  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.010983  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:53.010988  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:53.011039  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:53.055569  142411 cri.go:89] found id: ""
	I0420 01:27:53.055594  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.055611  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:53.055620  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:53.055633  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:53.071038  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:53.071067  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:53.151334  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:53.151364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:53.151381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:53.238509  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:53.238553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:53.284898  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:53.284945  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.183562  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.682524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:53.684003  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.027739  141746 pod_ready.go:92] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.027773  141746 pod_ready.go:81] duration metric: took 12.008613872s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.027785  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033100  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.033124  141746 pod_ready.go:81] duration metric: took 5.331694ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033136  141746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:53.041387  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.542345  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.896822  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:54.395015  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.843065  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:55.856928  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:55.857001  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:55.903058  142411 cri.go:89] found id: ""
	I0420 01:27:55.903092  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.903103  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:55.903111  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:55.903170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:55.944369  142411 cri.go:89] found id: ""
	I0420 01:27:55.944402  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.944414  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:55.944421  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:55.944474  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:55.983485  142411 cri.go:89] found id: ""
	I0420 01:27:55.983510  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.983517  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:55.983523  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:55.983571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:56.021931  142411 cri.go:89] found id: ""
	I0420 01:27:56.021956  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.021964  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:56.021970  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:56.022019  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:56.066671  142411 cri.go:89] found id: ""
	I0420 01:27:56.066705  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.066717  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:56.066724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:56.066788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:56.107724  142411 cri.go:89] found id: ""
	I0420 01:27:56.107783  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.107794  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:56.107800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:56.107854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:56.149201  142411 cri.go:89] found id: ""
	I0420 01:27:56.149234  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.149246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:56.149255  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:56.149328  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:56.189580  142411 cri.go:89] found id: ""
	I0420 01:27:56.189621  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.189633  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:56.189645  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:56.189661  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:56.243425  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:56.243462  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:56.261043  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:56.261079  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:56.341944  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:56.341967  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:56.341980  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:56.423252  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:56.423294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:55.684408  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.183545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:57.542492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.040617  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:56.892991  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.893124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.893660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.968894  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:58.984559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:58.984648  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:59.021603  142411 cri.go:89] found id: ""
	I0420 01:27:59.021634  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.021655  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:59.021666  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:59.021756  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:59.061592  142411 cri.go:89] found id: ""
	I0420 01:27:59.061626  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.061642  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:59.061649  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:59.061701  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:59.101956  142411 cri.go:89] found id: ""
	I0420 01:27:59.101986  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.101996  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:59.102003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:59.102072  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:59.141104  142411 cri.go:89] found id: ""
	I0420 01:27:59.141136  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.141145  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:59.141151  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:59.141221  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:59.188973  142411 cri.go:89] found id: ""
	I0420 01:27:59.189005  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.189014  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:59.189022  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:59.189107  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:59.232598  142411 cri.go:89] found id: ""
	I0420 01:27:59.232632  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.232641  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:59.232647  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:59.232704  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:59.272623  142411 cri.go:89] found id: ""
	I0420 01:27:59.272660  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.272669  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:59.272675  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:59.272739  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:59.309951  142411 cri.go:89] found id: ""
	I0420 01:27:59.309977  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.309984  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:59.309994  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:59.310005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:59.366589  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:59.366626  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:59.382724  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:59.382756  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:59.461072  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:59.461102  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:59.461122  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:59.544736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:59.544769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:02.089118  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:02.105402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:02.105483  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:02.144665  142411 cri.go:89] found id: ""
	I0420 01:28:02.144691  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.144700  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:02.144706  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:02.144759  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:02.187471  142411 cri.go:89] found id: ""
	I0420 01:28:02.187498  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.187508  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:02.187515  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:02.187576  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:02.229206  142411 cri.go:89] found id: ""
	I0420 01:28:02.229233  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.229241  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:02.229247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:02.229335  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:02.279425  142411 cri.go:89] found id: ""
	I0420 01:28:02.279464  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.279478  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:02.279488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:02.279577  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:02.323033  142411 cri.go:89] found id: ""
	I0420 01:28:02.323066  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.323082  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:02.323090  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:02.323155  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:02.360121  142411 cri.go:89] found id: ""
	I0420 01:28:02.360158  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.360170  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:02.360178  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:02.360244  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:02.398756  142411 cri.go:89] found id: ""
	I0420 01:28:02.398786  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.398797  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:02.398804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:02.398867  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:02.437982  142411 cri.go:89] found id: ""
	I0420 01:28:02.438010  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.438018  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:02.438028  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:02.438041  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:02.489396  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:02.489434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:02.506764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:02.506796  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:02.591894  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:02.591915  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:02.591929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:02.675241  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:02.675281  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:00.683139  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.684787  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.540829  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.041823  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:03.393076  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.396351  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.224296  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.238522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:05.238593  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:05.278495  142411 cri.go:89] found id: ""
	I0420 01:28:05.278529  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.278540  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:05.278549  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:05.278621  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:05.318096  142411 cri.go:89] found id: ""
	I0420 01:28:05.318122  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.318130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:05.318136  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:05.318196  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:05.358607  142411 cri.go:89] found id: ""
	I0420 01:28:05.358636  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.358653  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:05.358658  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:05.358749  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:05.417163  142411 cri.go:89] found id: ""
	I0420 01:28:05.417199  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.417211  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:05.417218  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:05.417284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:05.468566  142411 cri.go:89] found id: ""
	I0420 01:28:05.468599  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.468610  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:05.468619  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:05.468691  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:05.514005  142411 cri.go:89] found id: ""
	I0420 01:28:05.514037  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.514047  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:05.514055  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:05.514112  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:05.554972  142411 cri.go:89] found id: ""
	I0420 01:28:05.555001  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.555012  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:05.555020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:05.555083  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:05.596736  142411 cri.go:89] found id: ""
	I0420 01:28:05.596764  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.596773  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:05.596787  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:05.596800  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:05.649680  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:05.649719  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:05.667583  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:05.667614  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:05.743886  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:05.743922  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:05.743939  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:05.827827  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:05.827863  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.384615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.181917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.182902  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.541045  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:09.542114  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.892610  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:10.392899  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:08.401190  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:08.403071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:08.445453  142411 cri.go:89] found id: ""
	I0420 01:28:08.445486  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.445497  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:08.445505  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:08.445573  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:08.487598  142411 cri.go:89] found id: ""
	I0420 01:28:08.487636  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.487649  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:08.487657  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:08.487727  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:08.531416  142411 cri.go:89] found id: ""
	I0420 01:28:08.531445  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.531457  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:08.531465  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:08.531526  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:08.574964  142411 cri.go:89] found id: ""
	I0420 01:28:08.575000  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.575012  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:08.575020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:08.575075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:08.612644  142411 cri.go:89] found id: ""
	I0420 01:28:08.612679  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.612688  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:08.612695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:08.612748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:08.651775  142411 cri.go:89] found id: ""
	I0420 01:28:08.651800  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.651811  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:08.651817  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:08.651869  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:08.692869  142411 cri.go:89] found id: ""
	I0420 01:28:08.692894  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.692902  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:08.692908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:08.692957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:08.731765  142411 cri.go:89] found id: ""
	I0420 01:28:08.731794  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.731805  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:08.731817  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:08.731836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:08.747401  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:08.747445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:08.831069  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:08.831091  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:08.831110  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:08.919053  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:08.919095  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.965814  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:08.965854  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.518303  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:11.535213  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:11.535294  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:11.577182  142411 cri.go:89] found id: ""
	I0420 01:28:11.577214  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.577223  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:11.577229  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:11.577289  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:11.615023  142411 cri.go:89] found id: ""
	I0420 01:28:11.615055  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.615064  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:11.615070  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:11.615138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:11.654062  142411 cri.go:89] found id: ""
	I0420 01:28:11.654089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.654097  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:11.654104  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:11.654170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:11.700846  142411 cri.go:89] found id: ""
	I0420 01:28:11.700875  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.700885  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:11.700892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:11.700966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:11.743061  142411 cri.go:89] found id: ""
	I0420 01:28:11.743089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.743100  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:11.743109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:11.743175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:11.783651  142411 cri.go:89] found id: ""
	I0420 01:28:11.783687  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.783698  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:11.783706  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:11.783781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:11.827099  142411 cri.go:89] found id: ""
	I0420 01:28:11.827130  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.827139  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:11.827144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:11.827197  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:11.867476  142411 cri.go:89] found id: ""
	I0420 01:28:11.867510  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.867523  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:11.867535  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:11.867554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.920211  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:11.920246  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:11.937632  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:11.937670  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:12.014917  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:12.014940  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:12.014955  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:12.096549  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:12.096586  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:09.684447  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.183063  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.041220  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.540620  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.893441  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:15.408953  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.653783  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:14.667893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:14.667955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:14.710098  142411 cri.go:89] found id: ""
	I0420 01:28:14.710153  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.710164  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:14.710172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:14.710240  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:14.750891  142411 cri.go:89] found id: ""
	I0420 01:28:14.750920  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.750929  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:14.750939  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:14.751010  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:14.794062  142411 cri.go:89] found id: ""
	I0420 01:28:14.794103  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.794127  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:14.794135  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:14.794204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:14.834333  142411 cri.go:89] found id: ""
	I0420 01:28:14.834363  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.834375  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:14.834383  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:14.834446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:14.874114  142411 cri.go:89] found id: ""
	I0420 01:28:14.874148  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.874160  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:14.874168  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:14.874238  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:14.912685  142411 cri.go:89] found id: ""
	I0420 01:28:14.912711  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.912720  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:14.912726  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:14.912787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:14.954050  142411 cri.go:89] found id: ""
	I0420 01:28:14.954076  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.954083  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:14.954089  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:14.954150  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:14.992310  142411 cri.go:89] found id: ""
	I0420 01:28:14.992348  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.992357  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:14.992365  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:14.992388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:15.047471  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:15.047512  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:15.065800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:15.065842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:15.146009  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:15.146037  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:15.146058  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:15.232920  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:15.232962  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:17.781215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:17.797404  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:17.797466  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:17.840532  142411 cri.go:89] found id: ""
	I0420 01:28:17.840564  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.840573  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:17.840579  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:17.840636  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:17.881562  142411 cri.go:89] found id: ""
	I0420 01:28:17.881588  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.881596  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:17.881602  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:17.881651  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:17.935068  142411 cri.go:89] found id: ""
	I0420 01:28:17.935098  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.935108  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:17.935115  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:17.935177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:17.980745  142411 cri.go:89] found id: ""
	I0420 01:28:17.980782  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.980795  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:17.980804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:17.980880  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:18.051120  142411 cri.go:89] found id: ""
	I0420 01:28:18.051153  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.051164  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:18.051171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:18.051235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:18.091741  142411 cri.go:89] found id: ""
	I0420 01:28:18.091776  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.091788  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:18.091796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:18.091864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:18.133438  142411 cri.go:89] found id: ""
	I0420 01:28:18.133472  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.133482  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:18.133488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:18.133560  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:18.174624  142411 cri.go:89] found id: ""
	I0420 01:28:18.174665  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.174679  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:18.174694  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:18.174713  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:18.228519  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:18.228563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:18.246452  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:18.246487  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:18.322051  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:18.322074  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:18.322088  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:14.684817  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.182405  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:16.541139  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.041191  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.895052  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.895901  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:18.404873  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:18.404904  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:20.950553  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:20.965081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:20.965139  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:21.007198  142411 cri.go:89] found id: ""
	I0420 01:28:21.007243  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.007255  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:21.007263  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:21.007330  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:21.050991  142411 cri.go:89] found id: ""
	I0420 01:28:21.051019  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.051028  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:21.051034  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:21.051104  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:21.091953  142411 cri.go:89] found id: ""
	I0420 01:28:21.091986  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.091995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:21.092001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:21.092085  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:21.134134  142411 cri.go:89] found id: ""
	I0420 01:28:21.134164  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.134174  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:21.134181  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:21.134251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:21.173698  142411 cri.go:89] found id: ""
	I0420 01:28:21.173724  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.173731  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:21.173737  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:21.173801  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:21.221327  142411 cri.go:89] found id: ""
	I0420 01:28:21.221354  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.221362  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:21.221369  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:21.221428  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:21.262752  142411 cri.go:89] found id: ""
	I0420 01:28:21.262780  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.262791  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:21.262798  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:21.262851  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:21.303497  142411 cri.go:89] found id: ""
	I0420 01:28:21.303524  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.303535  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:21.303547  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:21.303563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:21.358231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:21.358265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:21.373723  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:21.373753  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:21.465016  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:21.465044  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:21.465061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:21.552087  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:21.552117  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:19.683617  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.182720  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:21.540588  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.039211  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.393170  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.396378  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.099938  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:24.116967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:24.117045  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:24.159458  142411 cri.go:89] found id: ""
	I0420 01:28:24.159491  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.159501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:24.159508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:24.159574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:24.206028  142411 cri.go:89] found id: ""
	I0420 01:28:24.206054  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.206065  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:24.206072  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:24.206137  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:24.248047  142411 cri.go:89] found id: ""
	I0420 01:28:24.248088  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.248101  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:24.248109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:24.248176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:24.287867  142411 cri.go:89] found id: ""
	I0420 01:28:24.287898  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.287909  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:24.287917  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:24.287995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:24.329399  142411 cri.go:89] found id: ""
	I0420 01:28:24.329433  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.329444  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:24.329452  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:24.329519  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:24.367846  142411 cri.go:89] found id: ""
	I0420 01:28:24.367871  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.367882  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:24.367889  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:24.367960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:24.414245  142411 cri.go:89] found id: ""
	I0420 01:28:24.414272  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.414283  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:24.414291  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:24.414354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:24.453268  142411 cri.go:89] found id: ""
	I0420 01:28:24.453302  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.453331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:24.453344  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:24.453366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:24.514501  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:24.514546  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:24.529551  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:24.529591  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:24.613734  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.613757  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:24.613775  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:24.693804  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:24.693843  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.238443  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:27.254172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:27.254235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:27.297048  142411 cri.go:89] found id: ""
	I0420 01:28:27.297101  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.297111  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:27.297119  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:27.297181  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:27.340145  142411 cri.go:89] found id: ""
	I0420 01:28:27.340171  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.340181  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:27.340189  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:27.340316  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:27.383047  142411 cri.go:89] found id: ""
	I0420 01:28:27.383077  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.383089  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:27.383096  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:27.383169  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:27.428088  142411 cri.go:89] found id: ""
	I0420 01:28:27.428122  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.428134  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:27.428142  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:27.428206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:27.468257  142411 cri.go:89] found id: ""
	I0420 01:28:27.468300  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.468310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:27.468317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:27.468389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:27.508834  142411 cri.go:89] found id: ""
	I0420 01:28:27.508873  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.508885  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:27.508892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:27.508953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:27.548853  142411 cri.go:89] found id: ""
	I0420 01:28:27.548893  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.548901  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:27.548908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:27.548956  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:27.587841  142411 cri.go:89] found id: ""
	I0420 01:28:27.587875  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.587886  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:27.587899  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:27.587917  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:27.667848  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:27.667888  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.714820  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:27.714856  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:27.766337  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:27.766381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:27.782585  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:27.782627  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:27.856172  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.184768  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.683097  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.040531  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:28.040802  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.542386  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.893091  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:29.393546  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.356809  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:30.372449  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:30.372529  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:30.422164  142411 cri.go:89] found id: ""
	I0420 01:28:30.422198  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.422209  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:30.422218  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:30.422283  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:30.460367  142411 cri.go:89] found id: ""
	I0420 01:28:30.460395  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.460404  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:30.460411  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:30.460498  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:30.508423  142411 cri.go:89] found id: ""
	I0420 01:28:30.508460  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.508471  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:30.508479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:30.508546  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:30.553124  142411 cri.go:89] found id: ""
	I0420 01:28:30.553152  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.553161  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:30.553167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:30.553225  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:30.601866  142411 cri.go:89] found id: ""
	I0420 01:28:30.601908  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.601919  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:30.601939  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:30.602014  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:30.645413  142411 cri.go:89] found id: ""
	I0420 01:28:30.645446  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.645457  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:30.645467  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:30.645539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:30.690955  142411 cri.go:89] found id: ""
	I0420 01:28:30.690988  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.690997  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:30.691006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:30.691077  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:30.732146  142411 cri.go:89] found id: ""
	I0420 01:28:30.732186  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.732197  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:30.732209  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:30.732228  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:30.786890  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:30.786928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:30.802887  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:30.802920  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:30.884422  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:30.884447  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:30.884461  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:30.967504  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:30.967540  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:29.183645  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.683218  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.044031  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:35.540100  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.897363  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:34.392658  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.515720  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:33.531895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:33.531953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:33.574626  142411 cri.go:89] found id: ""
	I0420 01:28:33.574668  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.574682  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:33.574690  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:33.574757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:33.620527  142411 cri.go:89] found id: ""
	I0420 01:28:33.620553  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.620562  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:33.620568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:33.620630  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:33.659685  142411 cri.go:89] found id: ""
	I0420 01:28:33.659711  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.659719  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:33.659724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:33.659773  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:33.699390  142411 cri.go:89] found id: ""
	I0420 01:28:33.699414  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.699422  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:33.699427  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:33.699485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:33.743819  142411 cri.go:89] found id: ""
	I0420 01:28:33.743844  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.743852  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:33.743858  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:33.743907  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:33.788416  142411 cri.go:89] found id: ""
	I0420 01:28:33.788442  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.788450  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:33.788456  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:33.788514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:33.834105  142411 cri.go:89] found id: ""
	I0420 01:28:33.834129  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.834138  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:33.834144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:33.834206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:33.884118  142411 cri.go:89] found id: ""
	I0420 01:28:33.884152  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.884164  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:33.884176  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:33.884193  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:33.940493  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:33.940525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:33.954800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:33.954829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:34.030788  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:34.030812  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:34.030829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:34.119533  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:34.119574  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.667132  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:36.684253  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:36.684334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:36.723598  142411 cri.go:89] found id: ""
	I0420 01:28:36.723629  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.723641  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:36.723649  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:36.723718  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:36.761563  142411 cri.go:89] found id: ""
	I0420 01:28:36.761594  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.761606  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:36.761614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:36.761679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:36.803553  142411 cri.go:89] found id: ""
	I0420 01:28:36.803590  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.803603  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:36.803611  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:36.803674  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:36.840368  142411 cri.go:89] found id: ""
	I0420 01:28:36.840407  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.840421  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:36.840430  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:36.840497  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:36.879689  142411 cri.go:89] found id: ""
	I0420 01:28:36.879724  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.879735  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:36.879743  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:36.879807  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:36.920757  142411 cri.go:89] found id: ""
	I0420 01:28:36.920785  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.920796  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:36.920809  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:36.920871  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:36.957522  142411 cri.go:89] found id: ""
	I0420 01:28:36.957548  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.957556  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:36.957562  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:36.957624  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:36.997358  142411 cri.go:89] found id: ""
	I0420 01:28:36.997390  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.997400  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:36.997409  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:36.997422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:37.055063  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:37.055105  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:37.070691  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:37.070720  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:37.150114  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:37.150140  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:37.150152  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:37.228676  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:37.228711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.182514  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.183398  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.040622  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.539486  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:36.395217  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.893457  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.894381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:39.776620  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:39.792201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:39.792268  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:39.831544  142411 cri.go:89] found id: ""
	I0420 01:28:39.831568  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.831576  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:39.831588  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:39.831652  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:39.869458  142411 cri.go:89] found id: ""
	I0420 01:28:39.869488  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.869496  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:39.869503  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:39.869564  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:39.911588  142411 cri.go:89] found id: ""
	I0420 01:28:39.911615  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.911626  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:39.911633  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:39.911703  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:39.952458  142411 cri.go:89] found id: ""
	I0420 01:28:39.952489  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.952505  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:39.952513  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:39.952580  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:39.992988  142411 cri.go:89] found id: ""
	I0420 01:28:39.993016  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.993023  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:39.993029  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:39.993117  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:40.038306  142411 cri.go:89] found id: ""
	I0420 01:28:40.038348  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.038359  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:40.038367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:40.038432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:40.082185  142411 cri.go:89] found id: ""
	I0420 01:28:40.082219  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.082230  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:40.082238  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:40.082332  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:40.120346  142411 cri.go:89] found id: ""
	I0420 01:28:40.120373  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.120382  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:40.120391  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:40.120405  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:40.173735  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:40.173769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:40.191808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:40.191844  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:40.271429  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:40.271456  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:40.271473  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:40.361519  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:40.361558  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:42.938354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:42.953088  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:42.953167  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:42.992539  142411 cri.go:89] found id: ""
	I0420 01:28:42.992564  142411 logs.go:276] 0 containers: []
	W0420 01:28:42.992571  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:42.992577  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:42.992637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:43.032017  142411 cri.go:89] found id: ""
	I0420 01:28:43.032059  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.032074  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:43.032082  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:43.032142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:43.077229  142411 cri.go:89] found id: ""
	I0420 01:28:43.077258  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.077266  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:43.077272  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:43.077342  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:43.117107  142411 cri.go:89] found id: ""
	I0420 01:28:43.117128  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.117139  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:43.117145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:43.117206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:43.156262  142411 cri.go:89] found id: ""
	I0420 01:28:43.156297  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.156310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:43.156317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:43.156384  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:43.195897  142411 cri.go:89] found id: ""
	I0420 01:28:43.195927  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.195935  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:43.195942  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:43.195990  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:43.230468  142411 cri.go:89] found id: ""
	I0420 01:28:43.230498  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.230513  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:43.230522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:43.230586  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:43.271980  142411 cri.go:89] found id: ""
	I0420 01:28:43.272009  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.272023  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:43.272035  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:43.272050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:43.331606  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:43.331641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:43.348411  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:43.348437  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:28:40.682973  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.182655  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:42.540341  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.039729  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.393377  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.893276  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:28:43.428628  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:43.428654  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:43.428675  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:43.511471  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:43.511506  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.056166  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:46.071677  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:46.071744  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:46.110710  142411 cri.go:89] found id: ""
	I0420 01:28:46.110740  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.110753  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:46.110761  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:46.110825  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:46.170680  142411 cri.go:89] found id: ""
	I0420 01:28:46.170712  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.170724  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:46.170731  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:46.170794  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:46.216387  142411 cri.go:89] found id: ""
	I0420 01:28:46.216413  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.216421  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:46.216429  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:46.216485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:46.258641  142411 cri.go:89] found id: ""
	I0420 01:28:46.258674  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.258685  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:46.258694  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:46.258755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:46.296359  142411 cri.go:89] found id: ""
	I0420 01:28:46.296395  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.296407  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:46.296416  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:46.296480  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:46.335194  142411 cri.go:89] found id: ""
	I0420 01:28:46.335223  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.335238  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:46.335247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:46.335300  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:46.373748  142411 cri.go:89] found id: ""
	I0420 01:28:46.373777  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.373789  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:46.373796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:46.373860  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:46.416960  142411 cri.go:89] found id: ""
	I0420 01:28:46.416987  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.416995  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:46.417005  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:46.417017  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:46.497542  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:46.497582  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.548086  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:46.548136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:46.607354  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:46.607390  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:46.624379  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:46.624415  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:46.707425  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:45.682511  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.682752  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.046102  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.540014  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.895805  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:50.393001  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.208459  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:49.223081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:49.223146  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:49.258688  142411 cri.go:89] found id: ""
	I0420 01:28:49.258718  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.258728  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:49.258734  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:49.258791  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:49.296817  142411 cri.go:89] found id: ""
	I0420 01:28:49.296859  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.296870  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:49.296878  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:49.296941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:49.337821  142411 cri.go:89] found id: ""
	I0420 01:28:49.337853  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.337863  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:49.337870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:49.337940  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:49.381360  142411 cri.go:89] found id: ""
	I0420 01:28:49.381384  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.381392  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:49.381397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:49.381463  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:49.420099  142411 cri.go:89] found id: ""
	I0420 01:28:49.420143  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.420154  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:49.420162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:49.420223  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:49.459810  142411 cri.go:89] found id: ""
	I0420 01:28:49.459843  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.459850  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:49.459859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:49.459911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:49.499776  142411 cri.go:89] found id: ""
	I0420 01:28:49.499808  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.499820  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:49.499828  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:49.499894  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:49.536115  142411 cri.go:89] found id: ""
	I0420 01:28:49.536147  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.536158  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:49.536169  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:49.536190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:49.594665  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:49.594701  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:49.611896  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:49.611929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:49.689667  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:49.689685  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:49.689697  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.769061  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:49.769106  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.319299  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:52.336861  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:52.336934  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:52.380690  142411 cri.go:89] found id: ""
	I0420 01:28:52.380717  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.380725  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:52.380731  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:52.380781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:52.429798  142411 cri.go:89] found id: ""
	I0420 01:28:52.429831  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.429843  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:52.429851  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:52.429915  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:52.474087  142411 cri.go:89] found id: ""
	I0420 01:28:52.474120  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.474130  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:52.474139  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:52.474204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:52.514739  142411 cri.go:89] found id: ""
	I0420 01:28:52.514776  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.514789  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:52.514796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:52.514852  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:52.562100  142411 cri.go:89] found id: ""
	I0420 01:28:52.562195  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.562228  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:52.562236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:52.562324  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:52.623266  142411 cri.go:89] found id: ""
	I0420 01:28:52.623301  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.623313  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:52.623321  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:52.623386  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:52.667788  142411 cri.go:89] found id: ""
	I0420 01:28:52.667818  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.667828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:52.667838  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:52.667902  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:52.724607  142411 cri.go:89] found id: ""
	I0420 01:28:52.724636  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.724645  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:52.724654  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:52.724666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.774798  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:52.774836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:52.833949  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:52.833989  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:52.851757  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:52.851787  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:52.939092  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:52.939119  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:52.939136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.684112  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.182596  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:51.540918  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.039528  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.393913  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.892043  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:55.525807  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:55.540481  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:55.540557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:55.584415  142411 cri.go:89] found id: ""
	I0420 01:28:55.584447  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.584458  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:55.584466  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:55.584538  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:55.623920  142411 cri.go:89] found id: ""
	I0420 01:28:55.623955  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.623965  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:55.623973  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:55.624037  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:55.667768  142411 cri.go:89] found id: ""
	I0420 01:28:55.667802  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.667810  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:55.667816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:55.667889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:55.708466  142411 cri.go:89] found id: ""
	I0420 01:28:55.708502  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.708513  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:55.708520  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:55.708600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:55.748797  142411 cri.go:89] found id: ""
	I0420 01:28:55.748838  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.748849  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:55.748857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:55.748919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:55.791714  142411 cri.go:89] found id: ""
	I0420 01:28:55.791743  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.791752  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:55.791761  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:55.791832  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:55.833836  142411 cri.go:89] found id: ""
	I0420 01:28:55.833862  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.833872  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:55.833879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:55.833942  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:55.877425  142411 cri.go:89] found id: ""
	I0420 01:28:55.877462  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.877472  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:55.877484  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:55.877501  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:55.933237  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:55.933280  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:55.949507  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:55.949534  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:56.025596  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:56.025624  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:56.025641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:56.105403  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:56.105439  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:54.683664  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.684401  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.040380  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.040834  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:00.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.893067  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.894882  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.653368  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:58.669367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:58.669429  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:58.712457  142411 cri.go:89] found id: ""
	I0420 01:28:58.712490  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.712501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:58.712508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:58.712574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:58.750246  142411 cri.go:89] found id: ""
	I0420 01:28:58.750273  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.750281  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:58.750287  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:58.750351  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:58.793486  142411 cri.go:89] found id: ""
	I0420 01:28:58.793514  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.793522  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:58.793529  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:58.793595  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:58.839413  142411 cri.go:89] found id: ""
	I0420 01:28:58.839448  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.839461  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:58.839469  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:58.839537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:58.881385  142411 cri.go:89] found id: ""
	I0420 01:28:58.881418  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.881430  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:58.881438  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:58.881509  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:58.923900  142411 cri.go:89] found id: ""
	I0420 01:28:58.923945  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.923965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:58.923975  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:58.924038  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:58.962795  142411 cri.go:89] found id: ""
	I0420 01:28:58.962836  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.962848  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:58.962856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:58.962919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:59.006309  142411 cri.go:89] found id: ""
	I0420 01:28:59.006341  142411 logs.go:276] 0 containers: []
	W0420 01:28:59.006350  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:59.006360  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:59.006372  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:59.062778  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:59.062819  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.078600  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:59.078630  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:59.159340  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:59.159361  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:59.159376  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:59.247257  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:59.247307  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:01.792687  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:01.808507  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:01.808588  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:01.851642  142411 cri.go:89] found id: ""
	I0420 01:29:01.851680  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.851691  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:01.851699  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:01.851765  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:01.891516  142411 cri.go:89] found id: ""
	I0420 01:29:01.891549  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.891560  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:01.891568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:01.891640  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:01.934353  142411 cri.go:89] found id: ""
	I0420 01:29:01.934390  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.934402  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:01.934410  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:01.934479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:01.972552  142411 cri.go:89] found id: ""
	I0420 01:29:01.972587  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.972599  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:01.972607  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:01.972711  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:02.012316  142411 cri.go:89] found id: ""
	I0420 01:29:02.012348  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.012360  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:02.012368  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:02.012423  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:02.056951  142411 cri.go:89] found id: ""
	I0420 01:29:02.056984  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.056994  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:02.057001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:02.057164  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:02.104061  142411 cri.go:89] found id: ""
	I0420 01:29:02.104091  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.104102  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:02.104110  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:02.104163  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:02.144085  142411 cri.go:89] found id: ""
	I0420 01:29:02.144114  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.144125  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:02.144137  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:02.144160  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:02.216560  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:02.216585  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:02.216598  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:02.307178  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:02.307222  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:02.349769  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:02.349798  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:02.401141  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:02.401176  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.185384  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.684462  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.685188  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:02.041060  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.540616  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.393943  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.894095  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.917513  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:04.934187  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:04.934266  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:04.970258  142411 cri.go:89] found id: ""
	I0420 01:29:04.970289  142411 logs.go:276] 0 containers: []
	W0420 01:29:04.970298  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:04.970304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:04.970359  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:05.012853  142411 cri.go:89] found id: ""
	I0420 01:29:05.012883  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.012893  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:05.012899  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:05.012960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:05.054793  142411 cri.go:89] found id: ""
	I0420 01:29:05.054822  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.054833  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:05.054842  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:05.054910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:05.094637  142411 cri.go:89] found id: ""
	I0420 01:29:05.094674  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.094684  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:05.094701  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:05.094770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:05.134874  142411 cri.go:89] found id: ""
	I0420 01:29:05.134903  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.134912  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:05.134918  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:05.134973  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:05.175637  142411 cri.go:89] found id: ""
	I0420 01:29:05.175668  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.175679  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:05.175687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:05.175752  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:05.217809  142411 cri.go:89] found id: ""
	I0420 01:29:05.217847  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.217860  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:05.217867  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:05.217933  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:05.266884  142411 cri.go:89] found id: ""
	I0420 01:29:05.266917  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.266930  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:05.266941  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:05.266958  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:05.323765  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:05.323818  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:05.338524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:05.338553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:05.419860  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:05.419889  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:05.419906  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:05.506268  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:05.506311  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.055690  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:08.072692  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:08.072758  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:08.116247  142411 cri.go:89] found id: ""
	I0420 01:29:08.116287  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.116296  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:08.116304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:08.116369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:08.163152  142411 cri.go:89] found id: ""
	I0420 01:29:08.163177  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.163185  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:08.163190  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:08.163246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:08.207330  142411 cri.go:89] found id: ""
	I0420 01:29:08.207357  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.207365  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:08.207371  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:08.207422  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:08.249833  142411 cri.go:89] found id: ""
	I0420 01:29:08.249864  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.249873  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:08.249879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:08.249941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:08.290834  142411 cri.go:89] found id: ""
	I0420 01:29:08.290867  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.290876  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:08.290883  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:08.290957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:08.333767  142411 cri.go:89] found id: ""
	I0420 01:29:08.333799  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.333809  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:08.333816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:08.333888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:08.381431  142411 cri.go:89] found id: ""
	I0420 01:29:08.381459  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.381468  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:08.381474  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:08.381532  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:06.183719  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.184829  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.544179  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:09.039956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.394434  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.893184  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:10.897462  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.423702  142411 cri.go:89] found id: ""
	I0420 01:29:08.423727  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.423739  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:08.423751  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:08.423767  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.468422  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:08.468460  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:08.524091  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:08.524125  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:08.540294  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:08.540323  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:08.622439  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:08.622472  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:08.622488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.208472  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:11.225412  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:11.225479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:11.273723  142411 cri.go:89] found id: ""
	I0420 01:29:11.273755  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.273767  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:11.273775  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:11.273840  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:11.316083  142411 cri.go:89] found id: ""
	I0420 01:29:11.316118  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.316130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:11.316137  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:11.316203  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:11.355632  142411 cri.go:89] found id: ""
	I0420 01:29:11.355659  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.355668  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:11.355674  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:11.355734  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:11.397277  142411 cri.go:89] found id: ""
	I0420 01:29:11.397305  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.397327  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:11.397335  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:11.397399  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:11.439333  142411 cri.go:89] found id: ""
	I0420 01:29:11.439357  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.439366  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:11.439372  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:11.439433  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:11.477044  142411 cri.go:89] found id: ""
	I0420 01:29:11.477072  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.477079  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:11.477086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:11.477142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:11.516150  142411 cri.go:89] found id: ""
	I0420 01:29:11.516184  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.516196  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:11.516204  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:11.516274  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:11.557272  142411 cri.go:89] found id: ""
	I0420 01:29:11.557303  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.557331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:11.557344  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:11.557366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.652272  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:11.652319  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:11.700469  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:11.700504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:11.756674  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:11.756711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:11.772377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:11.772407  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:11.851387  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:10.682669  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:12.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:11.041282  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.541986  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.393346  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:15.394909  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:14.352257  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:14.367635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:14.367714  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:14.408757  142411 cri.go:89] found id: ""
	I0420 01:29:14.408779  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.408788  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:14.408794  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:14.408843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:14.455123  142411 cri.go:89] found id: ""
	I0420 01:29:14.455150  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.455159  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:14.455165  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:14.455239  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:14.499546  142411 cri.go:89] found id: ""
	I0420 01:29:14.499573  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.499581  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:14.499587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:14.499635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:14.541811  142411 cri.go:89] found id: ""
	I0420 01:29:14.541841  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.541851  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:14.541859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:14.541923  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:14.586965  142411 cri.go:89] found id: ""
	I0420 01:29:14.586990  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.587001  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:14.587008  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:14.587071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:14.625251  142411 cri.go:89] found id: ""
	I0420 01:29:14.625279  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.625288  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:14.625294  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:14.625377  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:14.665038  142411 cri.go:89] found id: ""
	I0420 01:29:14.665067  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.665079  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:14.665086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:14.665157  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:14.706931  142411 cri.go:89] found id: ""
	I0420 01:29:14.706964  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.706978  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:14.706992  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:14.707044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:14.761681  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:14.761717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:14.776324  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:14.776350  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:14.856707  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:14.856727  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:14.856738  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:14.944019  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:14.944064  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:17.489112  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:17.507594  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:17.507660  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:17.556091  142411 cri.go:89] found id: ""
	I0420 01:29:17.556122  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.556132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:17.556140  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:17.556205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:17.600016  142411 cri.go:89] found id: ""
	I0420 01:29:17.600072  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.600086  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:17.600107  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:17.600171  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:17.643074  142411 cri.go:89] found id: ""
	I0420 01:29:17.643106  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.643118  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:17.643125  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:17.643190  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:17.684798  142411 cri.go:89] found id: ""
	I0420 01:29:17.684827  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.684838  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:17.684845  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:17.684910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:17.725451  142411 cri.go:89] found id: ""
	I0420 01:29:17.725481  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.725494  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:17.725503  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:17.725575  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:17.765918  142411 cri.go:89] found id: ""
	I0420 01:29:17.765944  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.765952  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:17.765959  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:17.766023  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:17.806011  142411 cri.go:89] found id: ""
	I0420 01:29:17.806038  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.806049  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:17.806056  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:17.806122  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:17.848409  142411 cri.go:89] found id: ""
	I0420 01:29:17.848441  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.848453  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:17.848465  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:17.848488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:17.903854  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:17.903900  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:17.919156  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:17.919191  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:18.008073  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:18.008115  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:18.008133  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:18.095887  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:18.095929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:14.687917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.182326  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:16.039159  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:18.040487  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.540830  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.893270  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.392563  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.646919  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:20.664559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:20.664635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:20.714440  142411 cri.go:89] found id: ""
	I0420 01:29:20.714472  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.714481  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:20.714487  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:20.714543  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:20.755249  142411 cri.go:89] found id: ""
	I0420 01:29:20.755276  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.755287  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:20.755294  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:20.755355  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:20.795744  142411 cri.go:89] found id: ""
	I0420 01:29:20.795777  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.795786  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:20.795797  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:20.795864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:20.838083  142411 cri.go:89] found id: ""
	I0420 01:29:20.838111  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.838120  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:20.838128  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:20.838193  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:20.880198  142411 cri.go:89] found id: ""
	I0420 01:29:20.880227  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.880238  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:20.880245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:20.880312  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:20.920496  142411 cri.go:89] found id: ""
	I0420 01:29:20.920522  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.920530  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:20.920536  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:20.920618  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:20.960137  142411 cri.go:89] found id: ""
	I0420 01:29:20.960170  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.960180  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:20.960186  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:20.960251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:20.999583  142411 cri.go:89] found id: ""
	I0420 01:29:20.999624  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.999637  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:20.999649  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:20.999665  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:21.077439  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:21.077476  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:21.121104  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:21.121148  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:21.173871  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:21.173909  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:21.189767  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:21.189795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:21.264715  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:19.682554  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:21.682995  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.543452  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:25.040875  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.393626  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:24.894279  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:23.765605  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:23.782250  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:23.782334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:23.827248  142411 cri.go:89] found id: ""
	I0420 01:29:23.827277  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.827285  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:23.827291  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:23.827349  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:23.867610  142411 cri.go:89] found id: ""
	I0420 01:29:23.867636  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.867645  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:23.867651  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:23.867712  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:23.906244  142411 cri.go:89] found id: ""
	I0420 01:29:23.906271  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.906278  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:23.906283  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:23.906343  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:23.952256  142411 cri.go:89] found id: ""
	I0420 01:29:23.952288  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.952306  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:23.952314  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:23.952378  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:23.992843  142411 cri.go:89] found id: ""
	I0420 01:29:23.992879  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.992888  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:23.992896  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:23.992959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:24.036460  142411 cri.go:89] found id: ""
	I0420 01:29:24.036493  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.036504  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:24.036512  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:24.036582  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:24.075910  142411 cri.go:89] found id: ""
	I0420 01:29:24.075944  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.075955  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:24.075962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:24.076033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:24.122638  142411 cri.go:89] found id: ""
	I0420 01:29:24.122676  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.122688  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:24.122698  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:24.122717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:24.138022  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:24.138061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:24.220977  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:24.220998  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:24.221012  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.302928  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:24.302972  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:24.351237  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:24.351277  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:26.910354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:26.926815  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:26.926900  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:26.966123  142411 cri.go:89] found id: ""
	I0420 01:29:26.966155  142411 logs.go:276] 0 containers: []
	W0420 01:29:26.966165  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:26.966172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:26.966246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:27.011679  142411 cri.go:89] found id: ""
	I0420 01:29:27.011714  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.011727  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:27.011735  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:27.011806  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:27.052116  142411 cri.go:89] found id: ""
	I0420 01:29:27.052141  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.052148  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:27.052155  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:27.052202  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:27.090375  142411 cri.go:89] found id: ""
	I0420 01:29:27.090404  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.090413  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:27.090419  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:27.090476  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:27.131911  142411 cri.go:89] found id: ""
	I0420 01:29:27.131946  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.131957  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:27.131965  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:27.132033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:27.176663  142411 cri.go:89] found id: ""
	I0420 01:29:27.176696  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.176714  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:27.176723  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:27.176788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:27.217806  142411 cri.go:89] found id: ""
	I0420 01:29:27.217836  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.217846  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:27.217853  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:27.217917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:27.253956  142411 cri.go:89] found id: ""
	I0420 01:29:27.253981  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.253989  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:27.253998  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:27.254014  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:27.298225  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:27.298264  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:27.351213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:27.351259  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:27.366352  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:27.366388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:27.466716  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:27.466742  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:27.466770  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.184743  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:26.681862  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:28.683193  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.042377  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.539413  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.395660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.893947  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:30.050528  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:30.065697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:30.065769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:30.104643  142411 cri.go:89] found id: ""
	I0420 01:29:30.104675  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.104686  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:30.104694  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:30.104753  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:30.143864  142411 cri.go:89] found id: ""
	I0420 01:29:30.143892  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.143903  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:30.143910  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:30.143976  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:30.187925  142411 cri.go:89] found id: ""
	I0420 01:29:30.187954  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.187964  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:30.187972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:30.188035  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:30.227968  142411 cri.go:89] found id: ""
	I0420 01:29:30.227995  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.228003  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:30.228009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:30.228059  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.269550  142411 cri.go:89] found id: ""
	I0420 01:29:30.269584  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.269596  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:30.269604  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:30.269672  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:30.311777  142411 cri.go:89] found id: ""
	I0420 01:29:30.311810  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.311819  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:30.311827  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:30.311878  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:30.353569  142411 cri.go:89] found id: ""
	I0420 01:29:30.353601  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.353610  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:30.353617  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:30.353683  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:30.395003  142411 cri.go:89] found id: ""
	I0420 01:29:30.395032  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.395043  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:30.395054  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:30.395066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:30.455495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:30.455536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:30.473749  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:30.473778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:30.555370  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:30.555397  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:30.555417  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:30.637079  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:30.637124  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:33.188917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:33.203689  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:33.203757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:33.246796  142411 cri.go:89] found id: ""
	I0420 01:29:33.246828  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.246840  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:33.246848  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:33.246911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:33.284667  142411 cri.go:89] found id: ""
	I0420 01:29:33.284700  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.284712  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:33.284720  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:33.284782  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:33.328653  142411 cri.go:89] found id: ""
	I0420 01:29:33.328688  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.328701  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:33.328709  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:33.328777  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:33.369081  142411 cri.go:89] found id: ""
	I0420 01:29:33.369107  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.369121  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:33.369130  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:33.369180  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.684861  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:32.689885  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.547492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.040445  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.894902  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.392071  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:33.414282  142411 cri.go:89] found id: ""
	I0420 01:29:33.414313  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.414322  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:33.414327  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:33.414411  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:33.457086  142411 cri.go:89] found id: ""
	I0420 01:29:33.457112  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.457119  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:33.457126  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:33.457176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:33.498686  142411 cri.go:89] found id: ""
	I0420 01:29:33.498716  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.498729  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:33.498738  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:33.498808  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:33.538872  142411 cri.go:89] found id: ""
	I0420 01:29:33.538907  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.538920  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:33.538932  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:33.538959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:33.592586  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:33.592631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:33.609200  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:33.609226  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:33.690795  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:33.690820  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:33.690836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:33.776092  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:33.776131  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:36.331256  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:36.348813  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:36.348892  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:36.397503  142411 cri.go:89] found id: ""
	I0420 01:29:36.397527  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.397534  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:36.397540  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:36.397603  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:36.439638  142411 cri.go:89] found id: ""
	I0420 01:29:36.439667  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.439675  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:36.439685  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:36.439761  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:36.477155  142411 cri.go:89] found id: ""
	I0420 01:29:36.477182  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.477194  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:36.477201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:36.477259  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:36.533326  142411 cri.go:89] found id: ""
	I0420 01:29:36.533360  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.533373  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:36.533381  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:36.533446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:36.573056  142411 cri.go:89] found id: ""
	I0420 01:29:36.573093  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.573107  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:36.573114  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:36.573177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:36.611901  142411 cri.go:89] found id: ""
	I0420 01:29:36.611937  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.611949  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:36.611957  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:36.612017  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:36.656780  142411 cri.go:89] found id: ""
	I0420 01:29:36.656810  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.656823  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:36.656830  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:36.656899  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:36.699872  142411 cri.go:89] found id: ""
	I0420 01:29:36.699906  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.699916  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:36.699928  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:36.699943  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:36.758859  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:36.758895  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:36.775108  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:36.775145  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:36.858001  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:36.858027  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:36.858044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:36.936114  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:36.936154  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:35.182481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:37.182529  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.041125  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.043465  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.540023  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.395316  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.894062  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.894416  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:39.487167  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:39.502929  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:39.502995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:39.547338  142411 cri.go:89] found id: ""
	I0420 01:29:39.547363  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.547371  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:39.547377  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:39.547430  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:39.608684  142411 cri.go:89] found id: ""
	I0420 01:29:39.608714  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.608722  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:39.608728  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:39.608793  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:39.679248  142411 cri.go:89] found id: ""
	I0420 01:29:39.679281  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.679292  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:39.679300  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:39.679361  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:39.725226  142411 cri.go:89] found id: ""
	I0420 01:29:39.725257  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.725270  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:39.725278  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:39.725363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:39.767653  142411 cri.go:89] found id: ""
	I0420 01:29:39.767681  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.767690  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:39.767697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:39.767760  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:39.807848  142411 cri.go:89] found id: ""
	I0420 01:29:39.807885  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.807893  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:39.807900  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:39.807968  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:39.847171  142411 cri.go:89] found id: ""
	I0420 01:29:39.847201  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.847212  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:39.847219  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:39.847284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:39.884959  142411 cri.go:89] found id: ""
	I0420 01:29:39.884996  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.885007  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:39.885034  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:39.885050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:39.959245  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:39.959269  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:39.959286  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:40.041394  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:40.041436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:40.083125  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:40.083171  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:40.139902  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:40.139957  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:42.657038  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:42.673303  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:42.673407  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:42.717081  142411 cri.go:89] found id: ""
	I0420 01:29:42.717106  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.717114  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:42.717120  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:42.717170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:42.762322  142411 cri.go:89] found id: ""
	I0420 01:29:42.762357  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.762367  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:42.762375  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:42.762442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:42.805059  142411 cri.go:89] found id: ""
	I0420 01:29:42.805112  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.805122  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:42.805131  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:42.805201  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:42.848539  142411 cri.go:89] found id: ""
	I0420 01:29:42.848568  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.848580  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:42.848587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:42.848679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:42.887915  142411 cri.go:89] found id: ""
	I0420 01:29:42.887949  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.887960  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:42.887967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:42.888032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:42.938832  142411 cri.go:89] found id: ""
	I0420 01:29:42.938867  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.938878  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:42.938888  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:42.938957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:42.982376  142411 cri.go:89] found id: ""
	I0420 01:29:42.982402  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.982409  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:42.982415  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:42.982477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:43.023264  142411 cri.go:89] found id: ""
	I0420 01:29:43.023293  142411 logs.go:276] 0 containers: []
	W0420 01:29:43.023301  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:43.023313  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:43.023326  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:43.079673  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:43.079714  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:43.094753  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:43.094786  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:43.180113  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:43.180149  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:43.180177  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:43.259830  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:43.259872  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:39.182568  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:41.186805  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.683131  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:42.540687  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.039857  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.392948  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.394081  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.802515  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:45.816908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:45.816965  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:45.861091  142411 cri.go:89] found id: ""
	I0420 01:29:45.861123  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.861132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:45.861138  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:45.861224  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:45.901677  142411 cri.go:89] found id: ""
	I0420 01:29:45.901702  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.901710  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:45.901716  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:45.901767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:45.938301  142411 cri.go:89] found id: ""
	I0420 01:29:45.938325  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.938334  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:45.938339  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:45.938393  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:45.978432  142411 cri.go:89] found id: ""
	I0420 01:29:45.978460  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.978473  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:45.978479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:45.978537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:46.019410  142411 cri.go:89] found id: ""
	I0420 01:29:46.019446  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.019455  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:46.019461  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:46.019524  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:46.071002  142411 cri.go:89] found id: ""
	I0420 01:29:46.071032  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.071041  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:46.071052  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:46.071124  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:46.110362  142411 cri.go:89] found id: ""
	I0420 01:29:46.110391  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.110402  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:46.110409  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:46.110477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:46.152276  142411 cri.go:89] found id: ""
	I0420 01:29:46.152311  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.152322  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:46.152334  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:46.152351  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:46.205121  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:46.205159  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:46.221808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:46.221842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:46.300394  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:46.300418  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:46.300434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:46.391961  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:46.392002  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:45.684038  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.176081  141927 pod_ready.go:81] duration metric: took 4m0.00056563s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	E0420 01:29:48.176112  141927 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:29:48.176130  141927 pod_ready.go:38] duration metric: took 4m7.024291569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:29:48.176166  141927 kubeadm.go:591] duration metric: took 4m16.819079549s to restartPrimaryControlPlane
	W0420 01:29:48.176256  141927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:29:48.176291  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
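Note: the 4m0s timeout above is the point at which this run stops waiting for the metrics-server pod and falls back to "kubeadm reset". As an illustrative sketch only (the kubeconfig/context for this cluster is not shown in this excerpt and is assumed), the same readiness condition can be inspected by hand with standard kubectl against the pod named in the log:

	# print the Ready condition of the pod the test was waiting on
	kubectl -n kube-system get pod metrics-server-569cc877fc-rqqlt \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block for up to the same 4m the test waited
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/metrics-server-569cc877fc-rqqlt --timeout=4m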
	I0420 01:29:47.040255  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.043956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:47.893875  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.894291  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.945086  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:48.961414  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:48.961491  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:49.010230  142411 cri.go:89] found id: ""
	I0420 01:29:49.010285  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.010299  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:49.010309  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:49.010385  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:49.054455  142411 cri.go:89] found id: ""
	I0420 01:29:49.054481  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.054491  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:49.054499  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:49.054566  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:49.094536  142411 cri.go:89] found id: ""
	I0420 01:29:49.094562  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.094572  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:49.094580  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:49.094740  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:49.134004  142411 cri.go:89] found id: ""
	I0420 01:29:49.134035  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.134046  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:49.134054  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:49.134118  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:49.173697  142411 cri.go:89] found id: ""
	I0420 01:29:49.173728  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.173741  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:49.173750  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:49.173817  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:49.215655  142411 cri.go:89] found id: ""
	I0420 01:29:49.215681  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.215689  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:49.215695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:49.215745  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:49.258282  142411 cri.go:89] found id: ""
	I0420 01:29:49.258312  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.258324  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:49.258332  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:49.258394  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:49.298565  142411 cri.go:89] found id: ""
	I0420 01:29:49.298597  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.298608  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:49.298620  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:49.298638  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:49.378833  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:49.378862  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:49.378880  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:49.467477  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:49.467517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:49.521747  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:49.521788  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:49.583386  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:49.583436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.102969  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:52.122971  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:52.123053  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:52.166166  142411 cri.go:89] found id: ""
	I0420 01:29:52.166199  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.166210  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:52.166219  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:52.166287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:52.206790  142411 cri.go:89] found id: ""
	I0420 01:29:52.206817  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.206824  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:52.206830  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:52.206889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:52.249879  142411 cri.go:89] found id: ""
	I0420 01:29:52.249911  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.249921  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:52.249931  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:52.249997  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:52.293953  142411 cri.go:89] found id: ""
	I0420 01:29:52.293997  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.294009  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:52.294018  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:52.294095  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:52.339447  142411 cri.go:89] found id: ""
	I0420 01:29:52.339478  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.339490  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:52.339497  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:52.339558  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:52.378383  142411 cri.go:89] found id: ""
	I0420 01:29:52.378416  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.378428  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:52.378435  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:52.378488  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:52.423079  142411 cri.go:89] found id: ""
	I0420 01:29:52.423121  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.423130  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:52.423137  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:52.423205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:52.459525  142411 cri.go:89] found id: ""
	I0420 01:29:52.459559  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.459572  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:52.459594  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:52.459610  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:52.567141  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:52.567186  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:52.618194  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:52.618235  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:52.681921  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:52.681959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.699065  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:52.699108  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:52.776829  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:51.540922  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.043224  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:52.397218  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.895147  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:55.277933  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:55.293380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:55.293455  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:55.337443  142411 cri.go:89] found id: ""
	I0420 01:29:55.337475  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.337483  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:55.337491  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:55.337557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:55.375911  142411 cri.go:89] found id: ""
	I0420 01:29:55.375942  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.375951  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:55.375957  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:55.376022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:55.418545  142411 cri.go:89] found id: ""
	I0420 01:29:55.418569  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.418577  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:55.418583  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:55.418635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:55.459343  142411 cri.go:89] found id: ""
	I0420 01:29:55.459378  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.459390  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:55.459397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:55.459452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:55.503851  142411 cri.go:89] found id: ""
	I0420 01:29:55.503878  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.503887  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:55.503895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:55.503959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:55.542533  142411 cri.go:89] found id: ""
	I0420 01:29:55.542556  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.542562  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:55.542568  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:55.542623  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:55.582205  142411 cri.go:89] found id: ""
	I0420 01:29:55.582236  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.582246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:55.582252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:55.582314  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:55.624727  142411 cri.go:89] found id: ""
	I0420 01:29:55.624757  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.624769  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:55.624781  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:55.624803  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:55.675403  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:55.675438  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:55.691492  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:55.691516  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:55.772283  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:55.772313  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:55.772330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:55.859440  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:55.859477  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:56.543221  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.041874  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:57.393723  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.894390  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:58.406009  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:58.422305  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:58.422382  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:58.468206  142411 cri.go:89] found id: ""
	I0420 01:29:58.468303  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.468321  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:58.468329  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:58.468402  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:58.513981  142411 cri.go:89] found id: ""
	I0420 01:29:58.514018  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.514027  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:58.514041  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:58.514105  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:58.559967  142411 cri.go:89] found id: ""
	I0420 01:29:58.560000  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.560011  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:58.560019  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:58.560084  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:58.600710  142411 cri.go:89] found id: ""
	I0420 01:29:58.600744  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.600763  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:58.600771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:58.600834  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:58.645995  142411 cri.go:89] found id: ""
	I0420 01:29:58.646022  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.646030  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:58.646036  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:58.646097  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:58.684930  142411 cri.go:89] found id: ""
	I0420 01:29:58.684957  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.684965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:58.684972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:58.685022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:58.727225  142411 cri.go:89] found id: ""
	I0420 01:29:58.727251  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.727259  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:58.727265  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:58.727319  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:58.765244  142411 cri.go:89] found id: ""
	I0420 01:29:58.765282  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.765293  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:58.765303  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:58.765330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:58.817791  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:58.817822  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:58.832882  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:58.832926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:58.919297  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:58.919325  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:58.919342  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:59.002590  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:59.002637  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.551854  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:01.568974  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:01.569054  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:01.609165  142411 cri.go:89] found id: ""
	I0420 01:30:01.609191  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.609200  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:01.609206  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:01.609272  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:01.653349  142411 cri.go:89] found id: ""
	I0420 01:30:01.653383  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.653396  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:01.653405  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:01.653482  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:01.698961  142411 cri.go:89] found id: ""
	I0420 01:30:01.698991  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.699002  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:01.699009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:01.699063  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:01.739230  142411 cri.go:89] found id: ""
	I0420 01:30:01.739271  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.739283  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:01.739292  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:01.739376  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:01.781839  142411 cri.go:89] found id: ""
	I0420 01:30:01.781873  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.781885  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:01.781893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:01.781960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:01.821212  142411 cri.go:89] found id: ""
	I0420 01:30:01.821241  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.821252  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:01.821259  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:01.821339  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:01.859959  142411 cri.go:89] found id: ""
	I0420 01:30:01.859984  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.859993  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:01.859999  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:01.860060  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:01.898832  142411 cri.go:89] found id: ""
	I0420 01:30:01.898858  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.898865  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:01.898875  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:01.898886  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.943065  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:01.943156  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:01.995618  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:01.995654  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:02.010489  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:02.010517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:02.090181  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:02.090222  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:02.090238  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:01.541135  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.041977  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:02.394456  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.894450  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.671376  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:04.687535  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:04.687629  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:04.728732  142411 cri.go:89] found id: ""
	I0420 01:30:04.728765  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.728778  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:04.728786  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:04.728854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:04.768537  142411 cri.go:89] found id: ""
	I0420 01:30:04.768583  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.768602  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:04.768610  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:04.768676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:04.811714  142411 cri.go:89] found id: ""
	I0420 01:30:04.811741  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.811750  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:04.811756  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:04.811816  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:04.852324  142411 cri.go:89] found id: ""
	I0420 01:30:04.852360  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.852371  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:04.852379  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:04.852452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:04.891657  142411 cri.go:89] found id: ""
	I0420 01:30:04.891688  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.891700  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:04.891708  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:04.891774  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:04.933192  142411 cri.go:89] found id: ""
	I0420 01:30:04.933222  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.933230  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:04.933236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:04.933291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:04.972796  142411 cri.go:89] found id: ""
	I0420 01:30:04.972819  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.972828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:04.972834  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:04.972888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:05.014782  142411 cri.go:89] found id: ""
	I0420 01:30:05.014821  142411 logs.go:276] 0 containers: []
	W0420 01:30:05.014833  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:05.014846  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:05.014862  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:05.067438  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:05.067470  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:05.121336  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:05.121371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:05.137495  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:05.137529  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:05.214132  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:05.214153  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:05.214170  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:07.796964  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:07.810856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:07.810917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:07.846993  142411 cri.go:89] found id: ""
	I0420 01:30:07.847024  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.847033  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:07.847040  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:07.847089  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:07.886422  142411 cri.go:89] found id: ""
	I0420 01:30:07.886452  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.886464  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:07.886474  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:07.886567  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:07.942200  142411 cri.go:89] found id: ""
	I0420 01:30:07.942230  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.942238  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:07.942245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:07.942296  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:07.980179  142411 cri.go:89] found id: ""
	I0420 01:30:07.980215  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.980226  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:07.980235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:07.980299  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:08.020097  142411 cri.go:89] found id: ""
	I0420 01:30:08.020130  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.020140  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:08.020145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:08.020215  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:08.063793  142411 cri.go:89] found id: ""
	I0420 01:30:08.063837  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.063848  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:08.063857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:08.063930  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:08.108674  142411 cri.go:89] found id: ""
	I0420 01:30:08.108705  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.108716  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:08.108724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:08.108798  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:08.147467  142411 cri.go:89] found id: ""
	I0420 01:30:08.147495  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.147503  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:08.147512  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:08.147525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:08.239416  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:08.239466  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:08.294639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:08.294669  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:08.349753  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:08.349795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:08.368971  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:08.369003  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:30:06.540958  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:08.541701  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:06.898857  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:09.397590  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:30:08.449996  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
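Note: every "describe nodes" attempt in this cycle fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl listings above: no kube-apiserver container is running on the node, so nothing is serving on port 8443. A minimal way to confirm this on the node, reusing the same crictl probe the log runs plus a standard port check (a sketch for reference, not part of the test itself):

	# the same container probe minikube runs above
	sudo crictl ps -a --name kube-apiserver
	# check whether anything is listening on the apiserver port
	sudo ss -tlnp | grep 8443 || echo "nothing listening on :8443"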
	I0420 01:30:10.950318  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:10.964969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:10.965032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:11.006321  142411 cri.go:89] found id: ""
	I0420 01:30:11.006354  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.006365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:11.006375  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:11.006437  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:11.047982  142411 cri.go:89] found id: ""
	I0420 01:30:11.048010  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.048019  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:11.048025  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:11.048073  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:11.089185  142411 cri.go:89] found id: ""
	I0420 01:30:11.089217  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.089226  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:11.089232  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:11.089287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:11.131293  142411 cri.go:89] found id: ""
	I0420 01:30:11.131322  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.131335  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:11.131344  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:11.131398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:11.170394  142411 cri.go:89] found id: ""
	I0420 01:30:11.170419  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.170427  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:11.170432  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:11.170485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:11.210580  142411 cri.go:89] found id: ""
	I0420 01:30:11.210619  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.210631  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:11.210640  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:11.210706  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:11.251938  142411 cri.go:89] found id: ""
	I0420 01:30:11.251977  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.251990  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:11.251998  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:11.252064  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:11.295999  142411 cri.go:89] found id: ""
	I0420 01:30:11.296033  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.296045  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:11.296057  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:11.296072  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:11.378564  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:11.378632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:11.422836  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:11.422868  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:11.475893  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:11.475928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:11.491524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:11.491555  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:11.569066  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:11.041078  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:13.540339  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:15.541762  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:11.893724  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.394206  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.886464  142057 pod_ready.go:81] duration metric: took 4m0.00077804s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
	E0420 01:30:14.886500  142057 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:30:14.886528  142057 pod_ready.go:38] duration metric: took 4m14.554070758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:14.886572  142057 kubeadm.go:591] duration metric: took 4m22.173690393s to restartPrimaryControlPlane
	W0420 01:30:14.886657  142057 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:14.886691  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:14.070158  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:14.086000  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:14.086067  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:14.128864  142411 cri.go:89] found id: ""
	I0420 01:30:14.128894  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.128906  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:14.128914  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:14.128986  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:14.169447  142411 cri.go:89] found id: ""
	I0420 01:30:14.169482  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.169497  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:14.169506  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:14.169583  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:14.210007  142411 cri.go:89] found id: ""
	I0420 01:30:14.210043  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.210054  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:14.210062  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:14.210119  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:14.247652  142411 cri.go:89] found id: ""
	I0420 01:30:14.247685  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.247695  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:14.247703  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:14.247764  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:14.290788  142411 cri.go:89] found id: ""
	I0420 01:30:14.290820  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.290830  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:14.290847  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:14.290908  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:14.351514  142411 cri.go:89] found id: ""
	I0420 01:30:14.351548  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.351570  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:14.351581  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:14.351637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:14.423481  142411 cri.go:89] found id: ""
	I0420 01:30:14.423520  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.423534  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:14.423543  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:14.423615  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:14.465597  142411 cri.go:89] found id: ""
	I0420 01:30:14.465622  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.465630  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:14.465639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:14.465655  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:14.522669  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:14.522705  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:14.541258  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:14.541293  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:14.618657  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:14.618678  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:14.618691  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:14.702616  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:14.702658  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:17.256212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:17.277171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:17.277250  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:17.321548  142411 cri.go:89] found id: ""
	I0420 01:30:17.321582  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.321600  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:17.321607  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:17.321676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:17.362856  142411 cri.go:89] found id: ""
	I0420 01:30:17.362883  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.362890  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:17.362896  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:17.362966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:17.409494  142411 cri.go:89] found id: ""
	I0420 01:30:17.409525  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.409539  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:17.409548  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:17.409631  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:17.447759  142411 cri.go:89] found id: ""
	I0420 01:30:17.447801  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.447812  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:17.447819  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:17.447885  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:17.498416  142411 cri.go:89] found id: ""
	I0420 01:30:17.498444  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.498454  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:17.498460  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:17.498528  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:17.546025  142411 cri.go:89] found id: ""
	I0420 01:30:17.546055  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.546064  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:17.546072  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:17.546138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:17.585797  142411 cri.go:89] found id: ""
	I0420 01:30:17.585829  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.585840  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:17.585848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:17.585919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:17.630850  142411 cri.go:89] found id: ""
	I0420 01:30:17.630886  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.630899  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:17.630911  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:17.630926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:17.689472  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:17.689510  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:17.705603  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:17.705642  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:17.794094  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:17.794137  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:17.794155  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:17.879397  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:17.879435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:18.041437  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.044174  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.428142  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:20.444936  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:20.445018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:20.487317  142411 cri.go:89] found id: ""
	I0420 01:30:20.487354  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.487365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:20.487373  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:20.487443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:20.537209  142411 cri.go:89] found id: ""
	I0420 01:30:20.537241  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.537254  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:20.537262  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:20.537348  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:20.584311  142411 cri.go:89] found id: ""
	I0420 01:30:20.584343  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.584352  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:20.584357  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:20.584413  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:20.631915  142411 cri.go:89] found id: ""
	I0420 01:30:20.631948  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.631959  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:20.631969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:20.632040  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:20.679680  142411 cri.go:89] found id: ""
	I0420 01:30:20.679707  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.679716  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:20.679721  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:20.679770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:20.724967  142411 cri.go:89] found id: ""
	I0420 01:30:20.725002  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.725013  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:20.725027  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:20.725091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:20.772717  142411 cri.go:89] found id: ""
	I0420 01:30:20.772751  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.772762  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:20.772771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:20.772837  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:20.812421  142411 cri.go:89] found id: ""
	I0420 01:30:20.812449  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.812460  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:20.812471  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:20.812485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:20.870522  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:20.870554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:20.886764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:20.886793  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:20.963941  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:20.963964  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:20.963979  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:21.045738  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:21.045778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:20.850989  141927 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.674674204s)
	I0420 01:30:20.851082  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:20.868537  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:20.880284  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:20.891650  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:20.891672  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:20.891726  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:30:20.902443  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:20.902509  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:20.913476  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:30:20.923762  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:20.923836  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:20.934281  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.944194  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:20.944254  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.955506  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:30:20.968039  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:20.968107  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:20.978918  141927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:21.214688  141927 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:22.539778  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:24.543547  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:23.600037  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:23.616539  142411 kubeadm.go:591] duration metric: took 4m4.142686832s to restartPrimaryControlPlane
	W0420 01:30:23.616641  142411 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:23.616676  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:25.481285  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.864573977s)
	I0420 01:30:25.481385  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:25.500950  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:25.518624  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:25.532506  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:25.532531  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:25.532584  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:25.546634  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:25.546708  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:25.561379  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:25.575506  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:25.575627  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:25.590615  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.604855  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:25.604923  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.619717  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:25.634525  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:25.634607  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:25.649408  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:25.735636  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:30:25.735697  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:25.913199  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:25.913347  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:25.913483  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:26.120240  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:26.122066  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:26.122169  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:26.122279  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:26.122395  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:26.122499  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:26.122623  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:26.122715  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:26.122806  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:26.122898  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:26.122999  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:26.123113  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:26.123173  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:26.123244  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:26.243908  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:26.354349  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:26.605778  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:26.833914  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:26.855348  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:26.857029  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:26.857250  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:27.010707  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:27.012314  142411 out.go:204]   - Booting up control plane ...
	I0420 01:30:27.012456  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:27.036284  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:27.049123  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:27.050561  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:27.053222  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:30:30.213456  141927 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:30.213557  141927 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:30.213687  141927 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:30.213826  141927 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:30.213915  141927 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:30.213978  141927 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:30.215501  141927 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:30.215594  141927 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:30.215667  141927 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:30.215802  141927 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:30.215886  141927 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:30.215960  141927 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:30.216018  141927 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:30.216097  141927 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:30.216156  141927 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:30.216258  141927 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:30.216350  141927 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:30.216385  141927 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:30.216447  141927 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:30.216517  141927 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:30.216589  141927 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:30.216653  141927 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:30.216743  141927 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:30.216832  141927 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:30.216933  141927 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:30.217019  141927 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:30.218228  141927 out.go:204]   - Booting up control plane ...
	I0420 01:30:30.218341  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:30.218446  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:30.218516  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:30.218615  141927 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:30.218703  141927 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:30.218753  141927 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:30.218904  141927 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:30.218975  141927 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:30.219027  141927 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001925972s
	I0420 01:30:30.219128  141927 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:30.219216  141927 kubeadm.go:309] [api-check] The API server is healthy after 5.502367015s
	I0420 01:30:30.219336  141927 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:30.219504  141927 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:30.219576  141927 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:30.219816  141927 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-907988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:30.219880  141927 kubeadm.go:309] [bootstrap-token] Using token: ozlrl4.y5r3psi4bnl35gso
	I0420 01:30:30.221283  141927 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:30.221416  141927 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:30.221533  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:30.221728  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:30.221968  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:30.222146  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:30.222255  141927 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:30.222385  141927 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:30.222455  141927 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:30.222524  141927 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:30.222534  141927 kubeadm.go:309] 
	I0420 01:30:30.222614  141927 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:30.222628  141927 kubeadm.go:309] 
	I0420 01:30:30.222692  141927 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:30.222699  141927 kubeadm.go:309] 
	I0420 01:30:30.222723  141927 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:30.222772  141927 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:30.222815  141927 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:30.222821  141927 kubeadm.go:309] 
	I0420 01:30:30.222878  141927 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:30.222885  141927 kubeadm.go:309] 
	I0420 01:30:30.222923  141927 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:30.222929  141927 kubeadm.go:309] 
	I0420 01:30:30.222994  141927 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:30.223100  141927 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:30.223171  141927 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:30.223189  141927 kubeadm.go:309] 
	I0420 01:30:30.223281  141927 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:30.223346  141927 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:30.223354  141927 kubeadm.go:309] 
	I0420 01:30:30.223423  141927 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223527  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:30.223552  141927 kubeadm.go:309] 	--control-plane 
	I0420 01:30:30.223559  141927 kubeadm.go:309] 
	I0420 01:30:30.223627  141927 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:30.223635  141927 kubeadm.go:309] 
	I0420 01:30:30.223704  141927 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223811  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:30.223826  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:30:30.223833  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:30.225184  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:27.041383  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:29.540967  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:30.226237  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:30.241388  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
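	The 1-k8s.conflist copied above is minikube's bridge CNI configuration. As an illustrative sketch only (assuming the usual bridge + host-local IPAM layout used by minikube's bridge CNI; the exact 496-byte file is not reproduced in this log), /etc/cni/net.d/1-k8s.conflist looks roughly like:
	
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "addIf": "true",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }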
	I0420 01:30:30.274356  141927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:30.274469  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:30.274503  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-907988 minikube.k8s.io/updated_at=2024_04_20T01_30_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=default-k8s-diff-port-907988 minikube.k8s.io/primary=true
	I0420 01:30:30.319402  141927 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:30.505362  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.006101  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.505679  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.005947  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.505747  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.005919  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.505449  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:34.006029  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.040710  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.541175  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.505846  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.006187  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.505618  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.005994  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.506217  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.006428  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.506359  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.006018  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.505454  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:39.006426  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.041157  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.542266  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.506227  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.005941  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.506123  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.006198  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.506244  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.006045  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.505458  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.006082  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.122481  141927 kubeadm.go:1107] duration metric: took 12.84807935s to wait for elevateKubeSystemPrivileges
	W0420 01:30:43.122525  141927 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:30:43.122535  141927 kubeadm.go:393] duration metric: took 5m11.83456536s to StartCluster
	I0420 01:30:43.122559  141927 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.122689  141927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:30:43.124746  141927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.125059  141927 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:30:43.126572  141927 out.go:177] * Verifying Kubernetes components...
	I0420 01:30:43.125129  141927 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:30:43.125301  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:30:43.128187  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:30:43.128231  141927 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128240  141927 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128277  141927 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-907988"
	I0420 01:30:43.128278  141927 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-907988"
	W0420 01:30:43.128288  141927 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:30:43.128302  141927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-907988"
	I0420 01:30:43.128352  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.128769  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128795  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128840  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128800  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128306  141927 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.128994  141927 addons.go:243] addon metrics-server should already be in state true
	I0420 01:30:43.129026  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.129378  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.129401  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.148251  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0420 01:30:43.148272  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0420 01:30:43.148503  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0420 01:30:43.148959  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.148985  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149060  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149605  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149626  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149683  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149688  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149698  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149706  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.150105  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150108  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150106  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150358  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.150703  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150733  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.150760  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150798  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.154242  141927 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.154266  141927 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:30:43.154300  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.154673  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.154715  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.167283  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0420 01:30:43.167925  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.168475  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.168496  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.168868  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.169094  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.171067  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0420 01:30:43.171384  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.173102  141927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:30:43.171760  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.172823  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0420 01:30:43.174639  141927 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.174661  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:30:43.174681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.174859  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.175307  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175331  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175460  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175476  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175799  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.175992  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.176361  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.176376  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.176686  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.178744  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.178848  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.180048  141927 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:30:43.179462  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.181257  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:30:43.181275  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:30:43.181289  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.181296  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.179641  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.182168  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.182437  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.182627  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.184562  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.184958  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.184985  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.185241  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.185430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.185621  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.185771  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.195778  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0420 01:30:43.196419  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.196979  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.197002  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.197763  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.198072  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.200177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.200480  141927 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.200497  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:30:43.200516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.204078  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.204128  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204154  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.204178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204275  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.204456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.204582  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.375731  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:30:43.424911  141927 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436729  141927 node_ready.go:49] node "default-k8s-diff-port-907988" has status "Ready":"True"
	I0420 01:30:43.436750  141927 node_ready.go:38] duration metric: took 11.810027ms for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436759  141927 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:43.445452  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:43.497224  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.526236  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.527573  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:30:43.527597  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:30:43.591844  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:30:43.591872  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:30:43.655692  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:43.655721  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:30:43.824523  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:44.808651  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.311370016s)
	I0420 01:30:44.808721  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808724  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.282444767s)
	I0420 01:30:44.808735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.808767  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808783  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809052  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809066  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809074  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809081  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809144  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809162  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809170  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809626  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809635  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809647  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809655  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:44.833935  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.833963  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.834326  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.834348  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316084  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.491512905s)
	I0420 01:30:45.316157  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316514  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316539  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316593  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316610  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316910  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316989  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.317007  141927 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-907988"
	I0420 01:30:45.316906  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:45.319289  141927 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
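	Once the metrics-server addon is reported as enabled, a quick way to confirm it is actually serving metrics is to query it through kubectl (a usage sketch; the context name is the profile shown above, and the command assumes kubectl is pointed at this cluster's kubeconfig):
	
	  kubectl --context default-k8s-diff-port-907988 top nodes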
	I0420 01:30:42.040865  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:44.042663  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.320468  141927 addons.go:505] duration metric: took 2.195343987s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:30:45.453717  141927 pod_ready.go:102] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.952010  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.952032  141927 pod_ready.go:81] duration metric: took 2.506556645s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.952040  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957512  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.957533  141927 pod_ready.go:81] duration metric: took 5.486362ms for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957541  141927 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962790  141927 pod_ready.go:92] pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.962810  141927 pod_ready.go:81] duration metric: took 5.261485ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962821  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968720  141927 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.968743  141927 pod_ready.go:81] duration metric: took 5.914425ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968754  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976930  141927 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.976946  141927 pod_ready.go:81] duration metric: took 8.183898ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976954  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350179  141927 pod_ready.go:92] pod "kube-proxy-jt8wr" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.350203  141927 pod_ready.go:81] duration metric: took 373.241134ms for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350212  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749542  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.749566  141927 pod_ready.go:81] duration metric: took 399.34726ms for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749573  141927 pod_ready.go:38] duration metric: took 3.312805349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:46.749587  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:30:46.749647  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:46.785318  141927 api_server.go:72] duration metric: took 3.660207577s to wait for apiserver process to appear ...
	I0420 01:30:46.785349  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:30:46.785373  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:30:46.793933  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:30:46.794890  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:30:46.794911  141927 api_server.go:131] duration metric: took 9.555146ms to wait for apiserver health ...
	I0420 01:30:46.794920  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:30:46.953036  141927 system_pods.go:59] 9 kube-system pods found
	I0420 01:30:46.953066  141927 system_pods.go:61] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:46.953070  141927 system_pods.go:61] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:46.953074  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:46.953077  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:46.953081  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:46.953085  141927 system_pods.go:61] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:46.953088  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:46.953094  141927 system_pods.go:61] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:46.953098  141927 system_pods.go:61] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:46.953108  141927 system_pods.go:74] duration metric: took 158.182751ms to wait for pod list to return data ...
	I0420 01:30:46.953116  141927 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:30:47.151205  141927 default_sa.go:45] found service account: "default"
	I0420 01:30:47.151245  141927 default_sa.go:55] duration metric: took 198.121475ms for default service account to be created ...
	I0420 01:30:47.151274  141927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:30:47.354321  141927 system_pods.go:86] 9 kube-system pods found
	I0420 01:30:47.354348  141927 system_pods.go:89] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:47.354353  141927 system_pods.go:89] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:47.354358  141927 system_pods.go:89] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:47.354364  141927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:47.354369  141927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:47.354373  141927 system_pods.go:89] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:47.354376  141927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:47.354383  141927 system_pods.go:89] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:47.354387  141927 system_pods.go:89] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:47.354395  141927 system_pods.go:126] duration metric: took 203.115923ms to wait for k8s-apps to be running ...
	I0420 01:30:47.354403  141927 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:30:47.354452  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.370946  141927 system_svc.go:56] duration metric: took 16.532953ms WaitForService to wait for kubelet
	I0420 01:30:47.370977  141927 kubeadm.go:576] duration metric: took 4.245884115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:30:47.370997  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:30:47.550097  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:30:47.550127  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:30:47.550138  141927 node_conditions.go:105] duration metric: took 179.136105ms to run NodePressure ...
	I0420 01:30:47.550150  141927 start.go:240] waiting for startup goroutines ...
	I0420 01:30:47.550156  141927 start.go:245] waiting for cluster config update ...
	I0420 01:30:47.550167  141927 start.go:254] writing updated cluster config ...
	I0420 01:30:47.550493  141927 ssh_runner.go:195] Run: rm -f paused
	I0420 01:30:47.614715  141927 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:30:47.616658  141927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-907988" cluster and "default" namespace by default
	I0420 01:30:47.623645  142057 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.736926697s)
	I0420 01:30:47.623716  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.648132  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:47.662521  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:47.674241  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:47.674265  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:47.674311  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:47.684981  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:47.685037  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:47.696549  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:47.706838  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:47.706885  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:47.717387  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.732194  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:47.732252  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.743425  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:47.756579  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:47.756629  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:47.769210  142057 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:47.832909  142057 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:47.832972  142057 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:47.987090  142057 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:47.987209  142057 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:47.987380  142057 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:48.253287  142057 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:48.255451  142057 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:48.255552  142057 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:48.255657  142057 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:48.255767  142057 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:48.255880  142057 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:48.255992  142057 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:48.256076  142057 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:48.256170  142057 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:48.256250  142057 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:48.256344  142057 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:48.256445  142057 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:48.256500  142057 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:48.256563  142057 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:48.346357  142057 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:48.602240  142057 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:48.741597  142057 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:49.086311  142057 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:49.284340  142057 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:49.284671  142057 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:49.287663  142057 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:46.540199  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:48.540848  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:50.541579  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:49.289305  142057 out.go:204]   - Booting up control plane ...
	I0420 01:30:49.289430  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:49.289558  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:49.289646  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:49.309520  142057 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:49.311328  142057 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:49.311389  142057 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:49.448766  142057 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:49.448889  142057 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:49.950225  142057 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.460713ms
	I0420 01:30:49.950316  142057 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:55.452587  142057 kubeadm.go:309] [api-check] The API server is healthy after 5.502061843s
	I0420 01:30:55.466768  142057 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:55.500892  142057 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:55.538376  142057 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:55.538631  142057 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-269507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:55.559344  142057 kubeadm.go:309] [bootstrap-token] Using token: jtn2hn.nnhc9vssv65463xy
	I0420 01:30:52.542748  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.560872  142057 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:55.561022  142057 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:55.575617  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:55.583307  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:55.586398  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:55.596138  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:55.599717  142057 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:55.861367  142057 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:56.310991  142057 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:56.860904  142057 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:56.860939  142057 kubeadm.go:309] 
	I0420 01:30:56.861051  142057 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:56.861077  142057 kubeadm.go:309] 
	I0420 01:30:56.861180  142057 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:56.861201  142057 kubeadm.go:309] 
	I0420 01:30:56.861232  142057 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:56.861345  142057 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:56.861438  142057 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:56.861454  142057 kubeadm.go:309] 
	I0420 01:30:56.861534  142057 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:56.861544  142057 kubeadm.go:309] 
	I0420 01:30:56.861628  142057 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:56.861644  142057 kubeadm.go:309] 
	I0420 01:30:56.861728  142057 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:56.861822  142057 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:56.861895  142057 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:56.861923  142057 kubeadm.go:309] 
	I0420 01:30:56.862120  142057 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:56.862228  142057 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:56.862246  142057 kubeadm.go:309] 
	I0420 01:30:56.862371  142057 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862532  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:56.862571  142057 kubeadm.go:309] 	--control-plane 
	I0420 01:30:56.862580  142057 kubeadm.go:309] 
	I0420 01:30:56.862700  142057 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:56.862724  142057 kubeadm.go:309] 
	I0420 01:30:56.862827  142057 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862955  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:56.863259  142057 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:56.863343  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:30:56.863358  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:56.865193  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:57.541555  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:00.040222  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:56.866515  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:56.880013  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:30:56.900677  142057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:56.900773  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:56.900809  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-269507 minikube.k8s.io/updated_at=2024_04_20T01_30_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=embed-certs-269507 minikube.k8s.io/primary=true
	I0420 01:30:56.942362  142057 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:57.124807  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:57.625201  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.125867  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.625845  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.124923  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.625004  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.125467  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.625081  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:01.125446  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.539751  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:04.540090  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:01.625279  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.125084  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.625048  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.125567  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.625428  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.125592  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.625874  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.125031  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.625698  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:06.125620  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.054009  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:31:07.054375  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:07.054708  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:06.625682  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.125909  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.625563  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.125451  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.625265  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.125677  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.625433  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.720318  142057 kubeadm.go:1107] duration metric: took 12.81961115s to wait for elevateKubeSystemPrivileges
	W0420 01:31:09.720362  142057 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:31:09.720373  142057 kubeadm.go:393] duration metric: took 5m17.067399347s to StartCluster
	I0420 01:31:09.720426  142057 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.720552  142057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:31:09.722646  142057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.722904  142057 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:31:09.724771  142057 out.go:177] * Verifying Kubernetes components...
	I0420 01:31:09.722979  142057 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:31:09.723175  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:31:09.724863  142057 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-269507"
	I0420 01:31:09.726208  142057 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-269507"
	W0420 01:31:09.726229  142057 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:31:09.724870  142057 addons.go:69] Setting default-storageclass=true in profile "embed-certs-269507"
	I0420 01:31:09.726270  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726289  142057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-269507"
	I0420 01:31:09.724889  142057 addons.go:69] Setting metrics-server=true in profile "embed-certs-269507"
	I0420 01:31:09.726351  142057 addons.go:234] Setting addon metrics-server=true in "embed-certs-269507"
	W0420 01:31:09.726365  142057 addons.go:243] addon metrics-server should already be in state true
	I0420 01:31:09.726395  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726159  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:31:09.726699  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726737  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726771  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726785  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726803  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726793  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.742932  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41221
	I0420 01:31:09.743143  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0420 01:31:09.743375  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743666  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743951  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.743968  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744102  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.744120  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744439  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.744497  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.745152  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745162  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745178  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745195  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745923  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0420 01:31:09.746441  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.747173  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.747202  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.747637  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.747934  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.751736  142057 addons.go:234] Setting addon default-storageclass=true in "embed-certs-269507"
	W0420 01:31:09.751760  142057 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:31:09.751791  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.752174  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.752199  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.763296  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I0420 01:31:09.763475  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0420 01:31:09.764103  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764119  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764635  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764656  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.764807  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764821  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.765353  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765369  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765562  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.766352  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.767675  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.769455  142057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:31:09.768866  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.770529  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:31:09.770596  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:31:09.770618  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.771959  142057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:31:07.039635  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.040381  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.772109  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0420 01:31:09.773531  142057 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:09.773545  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:31:09.773560  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.773989  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.774697  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.774711  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.774889  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775069  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.775522  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.775550  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.775770  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.775840  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.775855  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775973  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.776144  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.776283  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.776967  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777306  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.777376  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777621  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.777811  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.777949  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.778092  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.791609  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0420 01:31:09.792008  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.792475  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.792492  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.792811  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.793110  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.794743  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.795008  142057 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:09.795023  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:31:09.795037  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.797655  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798120  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.798144  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798394  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.798603  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.798745  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.798888  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.957088  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:31:10.012344  142057 node_ready.go:35] waiting up to 6m0s for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023887  142057 node_ready.go:49] node "embed-certs-269507" has status "Ready":"True"
	I0420 01:31:10.023917  142057 node_ready.go:38] duration metric: took 11.536403ms for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023929  142057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:10.035096  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:10.210022  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:10.222715  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:10.251807  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:31:10.251836  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:31:10.342638  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:31:10.342664  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:31:10.480676  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:10.480700  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:31:10.655186  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:11.331066  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121005107s)
	I0420 01:31:11.331125  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.108375538s)
	I0420 01:31:11.331139  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331152  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331165  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331181  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331530  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331601  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331611  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331641  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331664  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331681  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331684  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331692  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331699  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331646  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331932  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331959  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331979  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331991  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331989  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.332003  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.364269  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.364296  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.364641  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.364667  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.364671  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809229  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.154002194s)
	I0420 01:31:11.809282  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809618  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.809676  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809688  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809705  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809717  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809954  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809983  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.810001  142057 addons.go:470] Verifying addon metrics-server=true in "embed-certs-269507"
	I0420 01:31:11.810004  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.811610  142057 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:31:12.055506  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:12.055793  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:11.813049  142057 addons.go:505] duration metric: took 2.090078148s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:31:12.044618  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:12.565519  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.565543  142057 pod_ready.go:81] duration metric: took 2.530392572s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.565552  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.577986  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.578011  142057 pod_ready.go:81] duration metric: took 12.452506ms for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.578020  142057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595104  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.595129  142057 pod_ready.go:81] duration metric: took 17.103577ms for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595139  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602502  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.602524  142057 pod_ready.go:81] duration metric: took 7.377832ms for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602538  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608443  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.608462  142057 pod_ready.go:81] duration metric: took 5.916781ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608471  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939418  142057 pod_ready.go:92] pod "kube-proxy-4x66x" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.939444  142057 pod_ready.go:81] duration metric: took 330.966964ms for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939454  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341528  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:13.341556  142057 pod_ready.go:81] duration metric: took 402.093841ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341565  142057 pod_ready.go:38] duration metric: took 3.317622631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:13.341583  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:31:13.341648  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:31:13.361938  142057 api_server.go:72] duration metric: took 3.638999445s to wait for apiserver process to appear ...
	I0420 01:31:13.361967  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:31:13.361987  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:31:13.367149  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:31:13.368215  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:31:13.368243  142057 api_server.go:131] duration metric: took 6.268859ms to wait for apiserver health ...
	I0420 01:31:13.368254  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:31:13.545177  142057 system_pods.go:59] 9 kube-system pods found
	I0420 01:31:13.545203  142057 system_pods.go:61] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.545207  142057 system_pods.go:61] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.545212  142057 system_pods.go:61] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.545215  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.545219  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.545222  142057 system_pods.go:61] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.545224  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.545230  142057 system_pods.go:61] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.545233  142057 system_pods.go:61] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.545242  142057 system_pods.go:74] duration metric: took 176.980813ms to wait for pod list to return data ...
	I0420 01:31:13.545249  142057 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:31:13.739865  142057 default_sa.go:45] found service account: "default"
	I0420 01:31:13.739892  142057 default_sa.go:55] duration metric: took 194.636223ms for default service account to be created ...
	I0420 01:31:13.739903  142057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:31:13.942758  142057 system_pods.go:86] 9 kube-system pods found
	I0420 01:31:13.942785  142057 system_pods.go:89] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.942793  142057 system_pods.go:89] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.942801  142057 system_pods.go:89] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.942812  142057 system_pods.go:89] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.942819  142057 system_pods.go:89] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.942829  142057 system_pods.go:89] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.942835  142057 system_pods.go:89] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.942846  142057 system_pods.go:89] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.942854  142057 system_pods.go:89] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.942863  142057 system_pods.go:126] duration metric: took 202.954629ms to wait for k8s-apps to be running ...
	I0420 01:31:13.942873  142057 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:31:13.942926  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:31:13.962754  142057 system_svc.go:56] duration metric: took 19.872903ms WaitForService to wait for kubelet
	I0420 01:31:13.962781  142057 kubeadm.go:576] duration metric: took 4.239850872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:31:13.962802  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:31:14.139800  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:31:14.139834  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:31:14.139848  142057 node_conditions.go:105] duration metric: took 177.041675ms to run NodePressure ...
	I0420 01:31:14.139862  142057 start.go:240] waiting for startup goroutines ...
	I0420 01:31:14.139872  142057 start.go:245] waiting for cluster config update ...
	I0420 01:31:14.139886  142057 start.go:254] writing updated cluster config ...
	I0420 01:31:14.140201  142057 ssh_runner.go:195] Run: rm -f paused
	I0420 01:31:14.190985  142057 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:31:14.193207  142057 out.go:177] * Done! kubectl is now configured to use "embed-certs-269507" cluster and "default" namespace by default
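Once minikube prints "Done!", the context it names is immediately usable. A minimal sanity check against it (illustrative only, not part of the test run) could be:

  # Context name taken from the "Done!" line above
  $ kubectl --context embed-certs-269507 get nodes
  $ kubectl --context embed-certs-269507 -n kube-system get pods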
	I0420 01:31:11.040724  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:13.043491  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:15.540182  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:17.540894  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:19.541858  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:22.056094  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:22.056315  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:22.039484  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:24.043137  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:26.043262  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:28.540379  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:30.540568  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:32.543371  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:35.040187  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:37.541354  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:40.039779  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:42.057024  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:42.057278  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:42.040147  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:44.540170  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:46.540576  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:48.543604  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:51.034230  141746 pod_ready.go:81] duration metric: took 4m0.001077028s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	E0420 01:31:51.034258  141746 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:31:51.034280  141746 pod_ready.go:38] duration metric: took 4m12.046687249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:51.034308  141746 kubeadm.go:591] duration metric: took 4m55.947094434s to restartPrimaryControlPlane
	W0420 01:31:51.034367  141746 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:31:51.034400  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:22.058965  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:32:22.059213  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:32:22.059231  142411 kubeadm.go:309] 
	I0420 01:32:22.059284  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:32:22.059341  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:32:22.059351  142411 kubeadm.go:309] 
	I0420 01:32:22.059398  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:32:22.059449  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:32:22.059581  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:32:22.059606  142411 kubeadm.go:309] 
	I0420 01:32:22.059693  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:32:22.059725  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:32:22.059796  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:32:22.059821  142411 kubeadm.go:309] 
	I0420 01:32:22.059916  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:32:22.060046  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:32:22.060068  142411 kubeadm.go:309] 
	I0420 01:32:22.060225  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:32:22.060371  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:32:22.060498  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:32:22.060624  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:32:22.060643  142411 kubeadm.go:309] 
	I0420 01:32:22.061155  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:22.061294  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:32:22.061403  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0420 01:32:22.061569  142411 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
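The diagnostics kubeadm suggests in the error above can be run directly on the node; as a sketch (commands taken from the error text itself, with sudo added where needed):

  # Is the kubelet running, and what do its recent logs say?
  $ sudo systemctl status kubelet
  $ sudo journalctl -xeu kubelet
  # Reproduce the health probe that kubeadm keeps failing against port 10248
  $ curl -sSL http://localhost:10248/healthz
  # List control-plane containers under cri-o to spot a crashed component
  $ sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause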
	
	I0420 01:32:22.061628  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:23.211059  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.149398853s)
	I0420 01:32:23.211147  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.228140  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.240832  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.240868  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.240912  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.252674  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.252735  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.264128  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.274998  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.275059  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.286449  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.297377  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.297452  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.308971  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.320775  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.320842  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.333601  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.490252  141746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.455825605s)
	I0420 01:32:23.490330  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.515027  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:32:23.528835  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.542901  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.542927  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.542969  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.554931  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.555006  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.570665  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.583505  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.583576  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.595835  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.607468  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.607538  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.620629  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.634141  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.634222  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.648360  141746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.727697  141746 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:32:23.727825  141746 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:32:23.899280  141746 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:32:23.899376  141746 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:32:23.899456  141746 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:32:24.139299  141746 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:32:24.141410  141746 out.go:204]   - Generating certificates and keys ...
	I0420 01:32:24.141522  141746 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:32:24.141618  141746 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:32:24.141719  141746 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:32:24.141814  141746 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:32:24.141912  141746 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:32:24.141987  141746 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:32:24.142076  141746 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:32:24.142172  141746 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:32:24.142348  141746 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:32:24.142589  141746 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:32:24.142757  141746 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:32:24.142990  141746 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:32:24.247270  141746 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:32:24.326535  141746 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:32:24.538489  141746 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:32:24.594810  141746 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:32:24.712812  141746 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:32:24.713304  141746 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:32:24.719376  141746 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:32:24.721510  141746 out.go:204]   - Booting up control plane ...
	I0420 01:32:24.721649  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:32:24.721781  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:32:24.722470  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:32:24.748410  141746 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:32:24.750247  141746 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:32:24.750320  141746 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:32:24.906734  141746 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:32:24.906859  141746 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:32:25.409625  141746 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.844847ms
	I0420 01:32:25.409771  141746 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:32:23.603058  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:30.912062  141746 kubeadm.go:309] [api-check] The API server is healthy after 5.502434175s
	I0420 01:32:30.935231  141746 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:32:30.954860  141746 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:32:30.990255  141746 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:32:30.990480  141746 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-338118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:32:31.004218  141746 kubeadm.go:309] [bootstrap-token] Using token: 6ub3et.0wyu42zodual4kt8
	I0420 01:32:31.005771  141746 out.go:204]   - Configuring RBAC rules ...
	I0420 01:32:31.005875  141746 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:32:31.011978  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:32:31.020750  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:32:31.024958  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:32:31.032499  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:32:31.037128  141746 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:32:31.320324  141746 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:32:31.761773  141746 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:32:32.322540  141746 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:32:32.322563  141746 kubeadm.go:309] 
	I0420 01:32:32.322633  141746 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:32:32.322648  141746 kubeadm.go:309] 
	I0420 01:32:32.322728  141746 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:32:32.322737  141746 kubeadm.go:309] 
	I0420 01:32:32.322763  141746 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:32:32.322833  141746 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:32:32.322906  141746 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:32:32.322918  141746 kubeadm.go:309] 
	I0420 01:32:32.323005  141746 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:32:32.323015  141746 kubeadm.go:309] 
	I0420 01:32:32.323083  141746 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:32:32.323110  141746 kubeadm.go:309] 
	I0420 01:32:32.323184  141746 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:32:32.323304  141746 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:32:32.323362  141746 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:32:32.323372  141746 kubeadm.go:309] 
	I0420 01:32:32.323522  141746 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:32:32.323660  141746 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:32:32.323677  141746 kubeadm.go:309] 
	I0420 01:32:32.323765  141746 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.323916  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:32:32.323948  141746 kubeadm.go:309] 	--control-plane 
	I0420 01:32:32.323957  141746 kubeadm.go:309] 
	I0420 01:32:32.324035  141746 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:32:32.324049  141746 kubeadm.go:309] 
	I0420 01:32:32.324201  141746 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.324348  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:32:32.324967  141746 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
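Both the preflight warning and the init success output above map directly onto shell steps on the node; as an illustrative sketch:

  # Clear the [WARNING Service-Kubelet] so the kubelet starts on boot
  $ sudo systemctl enable kubelet.service
  # Make the cluster usable for a regular user, exactly as the init output describes
  $ mkdir -p $HOME/.kube
  $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  $ sudo chown $(id -u):$(id -g) $HOME/.kube/config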
	I0420 01:32:32.325210  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:32:32.325228  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:32:32.327624  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:32:32.329029  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:32:32.344181  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
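The 496-byte file copied here is minikube's bridge CNI configuration; its exact contents are not shown in the log. A minimal bridge + portmap conflist of the same general shape (an assumption for illustration, not the file minikube actually writes) would be installed like this:

  # Hypothetical minimal conflist; NOT the actual 496-byte file from the log
  $ sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF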
	I0420 01:32:32.368978  141746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:32:32.369052  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:32.369086  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-338118 minikube.k8s.io/updated_at=2024_04_20T01_32_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=no-preload-338118 minikube.k8s.io/primary=true
	I0420 01:32:32.579160  141746 ops.go:34] apiserver oom_adj: -16
	I0420 01:32:32.579218  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.079458  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.579498  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.079957  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.579520  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.079902  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.579955  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.079525  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.579612  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.079831  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.579989  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.079481  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.579798  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.080239  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.579654  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.080267  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.579837  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.079840  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.579347  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.079368  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.579641  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.079257  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.579647  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.079317  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.580002  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.698993  141746 kubeadm.go:1107] duration metric: took 12.330007154s to wait for elevateKubeSystemPrivileges
	W0420 01:32:44.699036  141746 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:32:44.699045  141746 kubeadm.go:393] duration metric: took 5m49.674421659s to StartCluster
	I0420 01:32:44.699064  141746 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.699166  141746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:32:44.700731  141746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.700982  141746 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:32:44.702752  141746 out.go:177] * Verifying Kubernetes components...
	I0420 01:32:44.701040  141746 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:32:44.701201  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:32:44.704065  141746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-338118"
	I0420 01:32:44.704078  141746 addons.go:69] Setting metrics-server=true in profile "no-preload-338118"
	I0420 01:32:44.704077  141746 addons.go:69] Setting default-storageclass=true in profile "no-preload-338118"
	I0420 01:32:44.704099  141746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-338118"
	W0420 01:32:44.704105  141746 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:32:44.704114  141746 addons.go:234] Setting addon metrics-server=true in "no-preload-338118"
	I0420 01:32:44.704113  141746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-338118"
	W0420 01:32:44.704124  141746 addons.go:243] addon metrics-server should already be in state true
	I0420 01:32:44.704151  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704157  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704069  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:32:44.704452  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704485  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704503  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704521  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704535  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704545  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.720663  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0420 01:32:44.720685  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0420 01:32:44.721210  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721222  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721746  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721766  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.721901  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721925  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.722282  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722311  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722889  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.722914  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.723194  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0420 01:32:44.723775  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.724401  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.724427  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.724790  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.724975  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.728728  141746 addons.go:234] Setting addon default-storageclass=true in "no-preload-338118"
	W0420 01:32:44.728751  141746 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:32:44.728780  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.729136  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.729161  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.738505  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0420 01:32:44.738893  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.739388  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.739409  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.739916  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.740120  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.741929  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0420 01:32:44.742090  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.744131  141746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:32:44.742538  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.745561  141746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:44.745579  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:32:44.745597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.744662  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.745640  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.745994  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.746345  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.747491  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0420 01:32:44.747878  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.748594  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.748731  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.748752  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.750445  141746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:32:44.749050  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.749380  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.749990  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.752010  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:32:44.752029  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:32:44.752046  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.752131  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.752155  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.752307  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.752479  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.752647  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.752676  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.752676  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.754727  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755188  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.755216  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755497  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.755696  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.755866  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.756034  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.768442  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0420 01:32:44.768887  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.769453  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.769473  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.769852  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.770359  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.772155  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.772443  141746 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:44.772651  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:32:44.772686  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.775775  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776177  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.776205  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776313  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.776492  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.776667  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.776832  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.930301  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:32:44.948472  141746 node_ready.go:35] waiting up to 6m0s for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960637  141746 node_ready.go:49] node "no-preload-338118" has status "Ready":"True"
	I0420 01:32:44.960664  141746 node_ready.go:38] duration metric: took 12.15407ms for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960676  141746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:44.971143  141746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980894  141746 pod_ready.go:92] pod "etcd-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.980917  141746 pod_ready.go:81] duration metric: took 9.749994ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980929  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995192  141746 pod_ready.go:92] pod "kube-apiserver-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.995217  141746 pod_ready.go:81] duration metric: took 14.279681ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995229  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004302  141746 pod_ready.go:92] pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:45.004324  141746 pod_ready.go:81] duration metric: took 9.086713ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004338  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.062482  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:45.066314  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:32:45.066334  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:32:45.093830  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:45.148558  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:32:45.148600  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:32:45.235321  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:45.235349  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:32:45.275661  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:46.686292  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592425062s)
	I0420 01:32:46.686344  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.623774979s)
	I0420 01:32:46.686360  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686375  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686385  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686401  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686822  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.686897  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.686911  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686920  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686835  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.686839  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687001  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687013  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.687027  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686850  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.687153  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687166  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687359  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687373  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.697988  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.422274698s)
	I0420 01:32:46.698045  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698059  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698320  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698339  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698351  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698359  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698568  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.698658  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698676  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698687  141746 addons.go:470] Verifying addon metrics-server=true in "no-preload-338118"
	I0420 01:32:46.733170  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.733198  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.733551  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.733573  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.733605  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.735297  141746 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0420 01:32:46.736665  141746 addons.go:505] duration metric: took 2.035625149s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
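A minimal way to double-check the addons reported enabled above, assuming the profile name "no-preload-338118" shown in this log and a working kubeconfig:

    minikube addons list -p no-preload-338118        # storage-provisioner, metrics-server, default-storageclass should be enabled
    kubectl -n kube-system get deploy metrics-server  # Deployment installed by the metrics-server addon
    kubectl get storageclass                          # minikube's default class is named "standard"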
	I0420 01:32:47.011271  141746 pod_ready.go:92] pod "kube-proxy-f57d9" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.011299  141746 pod_ready.go:81] duration metric: took 2.006954798s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.011309  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025378  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.025408  141746 pod_ready.go:81] duration metric: took 14.090474ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025421  141746 pod_ready.go:38] duration metric: took 2.064731781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:47.025443  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:32:47.025511  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:32:47.052680  141746 api_server.go:72] duration metric: took 2.351656586s to wait for apiserver process to appear ...
	I0420 01:32:47.052712  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:32:47.052738  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:32:47.061908  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:32:47.065615  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:32:47.065641  141746 api_server.go:131] duration metric: took 12.920384ms to wait for apiserver health ...
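The healthz poll above can be reproduced by hand; a sketch, assuming kubectl is already pointed at this cluster:

    kubectl get --raw /healthz            # prints "ok" when the apiserver is healthy, matching the 200 above
    kubectl get --raw '/readyz?verbose'   # per-check breakdown on recent apiservers
    kubectl version                       # confirms the v1.30.0 control-plane version logged above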
	I0420 01:32:47.065651  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:32:47.158039  141746 system_pods.go:59] 9 kube-system pods found
	I0420 01:32:47.158076  141746 system_pods.go:61] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158087  141746 system_pods.go:61] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158096  141746 system_pods.go:61] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.158101  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.158107  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.158113  141746 system_pods.go:61] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.158117  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.158126  141746 system_pods.go:61] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.158134  141746 system_pods.go:61] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.158147  141746 system_pods.go:74] duration metric: took 92.489697ms to wait for pod list to return data ...
	I0420 01:32:47.158162  141746 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:32:47.351962  141746 default_sa.go:45] found service account: "default"
	I0420 01:32:47.352002  141746 default_sa.go:55] duration metric: took 193.830142ms for default service account to be created ...
	I0420 01:32:47.352016  141746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:32:47.557471  141746 system_pods.go:86] 9 kube-system pods found
	I0420 01:32:47.557511  141746 system_pods.go:89] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557524  141746 system_pods.go:89] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557534  141746 system_pods.go:89] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.557540  141746 system_pods.go:89] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.557547  141746 system_pods.go:89] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.557554  141746 system_pods.go:89] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.557564  141746 system_pods.go:89] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.557577  141746 system_pods.go:89] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.557589  141746 system_pods.go:89] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.557602  141746 system_pods.go:126] duration metric: took 205.577946ms to wait for k8s-apps to be running ...
	I0420 01:32:47.557615  141746 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:32:47.557674  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:47.577745  141746 system_svc.go:56] duration metric: took 20.111982ms WaitForService to wait for kubelet
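The systemctl probe above runs inside the node; a sketch of checking the kubelet interactively, assuming the same profile name:

    minikube ssh -p no-preload-338118 -- sudo systemctl is-active kubelet
    minikube ssh -p no-preload-338118 -- sudo systemctl status kubelet --no-pager -l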
	I0420 01:32:47.577774  141746 kubeadm.go:576] duration metric: took 2.876759476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:32:47.577794  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:32:47.753216  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:32:47.753246  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:32:47.753257  141746 node_conditions.go:105] duration metric: took 175.457668ms to run NodePressure ...
	I0420 01:32:47.753269  141746 start.go:240] waiting for startup goroutines ...
	I0420 01:32:47.753275  141746 start.go:245] waiting for cluster config update ...
	I0420 01:32:47.753286  141746 start.go:254] writing updated cluster config ...
	I0420 01:32:47.753612  141746 ssh_runner.go:195] Run: rm -f paused
	I0420 01:32:47.804681  141746 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:32:47.806823  141746 out.go:177] * Done! kubectl is now configured to use "no-preload-338118" cluster and "default" namespace by default
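Once "Done!" is printed the kubeconfig context has already been switched; a quick sanity check, assuming the kubectl 1.30.0 reported above:

    kubectl config current-context    # expected: no-preload-338118
    kubectl get nodes -o wide         # the single control-plane node should be Ready
    kubectl -n kube-system get pods   # matches the pod list captured earlier in this log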
	I0420 01:34:20.028550  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:34:20.028769  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0420 01:34:20.030749  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:34:20.030826  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:34:20.030947  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:34:20.031078  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:34:20.031217  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:34:20.031319  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:34:20.032927  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:34:20.033024  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:34:20.033110  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:34:20.033211  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:34:20.033286  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:34:20.033410  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:34:20.033496  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:34:20.033597  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:34:20.033695  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:34:20.033805  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:34:20.033921  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:34:20.033972  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:34:20.034042  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:34:20.034125  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:34:20.034200  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:34:20.034287  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:34:20.034355  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:34:20.034510  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:34:20.034614  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:34:20.034680  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:34:20.034760  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:34:20.036300  142411 out.go:204]   - Booting up control plane ...
	I0420 01:34:20.036380  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:34:20.036479  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:34:20.036583  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:34:20.036705  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:34:20.036888  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:34:20.036955  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:34:20.037046  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037228  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037291  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037494  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037576  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037730  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037789  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037977  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038044  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.038262  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038284  142411 kubeadm.go:309] 
	I0420 01:34:20.038341  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:34:20.038382  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:34:20.038396  142411 kubeadm.go:309] 
	I0420 01:34:20.038443  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:34:20.038476  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:34:20.038612  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:34:20.038625  142411 kubeadm.go:309] 
	I0420 01:34:20.038735  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:34:20.038767  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:34:20.038794  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:34:20.038808  142411 kubeadm.go:309] 
	I0420 01:34:20.038902  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:34:20.038977  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:34:20.038987  142411 kubeadm.go:309] 
	I0420 01:34:20.039101  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:34:20.039203  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:34:20.039274  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:34:20.039342  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:34:20.039384  142411 kubeadm.go:309] 
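The [kubelet-check] failures above come from kubeadm repeatedly probing http://localhost:10248/healthz on the node; a troubleshooting sketch that follows the advice quoted in the output (run on the node, e.g. via minikube ssh):

    sudo systemctl status kubelet --no-pager -l
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    curl -sS http://localhost:10248/healthz ; echo     # the probe kubeadm keeps retrying above
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause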
	I0420 01:34:20.039417  142411 kubeadm.go:393] duration metric: took 8m0.622979268s to StartCluster
	I0420 01:34:20.039459  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:34:20.039514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:34:20.090236  142411 cri.go:89] found id: ""
	I0420 01:34:20.090262  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.090270  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:34:20.090276  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:34:20.090331  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:34:20.133841  142411 cri.go:89] found id: ""
	I0420 01:34:20.133867  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.133875  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:34:20.133883  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:34:20.133955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:34:20.176186  142411 cri.go:89] found id: ""
	I0420 01:34:20.176219  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.176230  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:34:20.176235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:34:20.176295  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:34:20.214895  142411 cri.go:89] found id: ""
	I0420 01:34:20.214932  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.214944  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:34:20.214951  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:34:20.215018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:34:20.257759  142411 cri.go:89] found id: ""
	I0420 01:34:20.257786  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.257795  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:34:20.257800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:34:20.257857  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:34:20.298111  142411 cri.go:89] found id: ""
	I0420 01:34:20.298153  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.298164  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:34:20.298172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:34:20.298226  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:34:20.333435  142411 cri.go:89] found id: ""
	I0420 01:34:20.333469  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.333481  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:34:20.333489  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:34:20.333554  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:34:20.370848  142411 cri.go:89] found id: ""
	I0420 01:34:20.370872  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.370880  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:34:20.370890  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:34:20.370902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:34:20.425495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:34:20.425536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:34:20.442039  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:34:20.442066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:34:20.523456  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
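The "describe nodes" step fails because nothing is listening on the apiserver port yet; a sketch of confirming that on the node, assuming CRI-O as elsewhere in this log:

    sudo ss -ltnp | grep 8443 || echo "apiserver not listening"
    ls /etc/kubernetes/manifests                  # static pod manifests kubeadm wrote
    sudo crictl ps -a --name kube-apiserver       # empty here, matching the "0 containers" result above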
	I0420 01:34:20.523483  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:34:20.523504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:34:20.633387  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:34:20.633427  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0420 01:34:20.688731  142411 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0420 01:34:20.688783  142411 out.go:239] * 
	W0420 01:34:20.688839  142411 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.688862  142411 out.go:239] * 
	W0420 01:34:20.689758  142411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:34:20.693376  142411 out.go:177] 
	W0420 01:34:20.694909  142411 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.694971  142411 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0420 01:34:20.695003  142411 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
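The suggested flag is passed at start time; a sketch of retrying with it, assuming the same legacy Kubernetes version and CRI-O runtime used by this run (the profile name is illustrative, not taken from the log):

    minikube start -p old-k8s-version \
      --kubernetes-version=v1.20.0 \
      --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd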
	I0420 01:34:20.696409  142411 out.go:177] 
	
	
	==> CRI-O <==
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.931011618Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03f244fe-8ff0-4eb0-a258-d5b5bd484608 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.932616864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8677b429-4a1e-407a-9496-23884ebd84cc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.933134726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577309933109961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8677b429-4a1e-407a-9496-23884ebd84cc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.934080578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0289680b-8654-4221-9a81-0a7ae787f315 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.934191664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0289680b-8654-4221-9a81-0a7ae787f315 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.934511172Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031,PodSandboxId:b33d1aec626eb1433ac85d191075dd66073501f5a366a78ec8bd16694e81cfa8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576767181067027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c12418-805f-4923-b7ab-4fa0fe07ec9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6f824527,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3,PodSandboxId:27853fa3c62eb7d341e02dd40a599b437d79561b0058a63303d3665b540c2b94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766464108947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lhnxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0fb3119-abcb-4646-9aae-a54438a76adf,},Annotations:map[string]string{io.kubernetes.container.hash: 744d27ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d,PodSandboxId:09e00fbbb48fd2831199a1546285d81720184d589490604df33575ce42b0ea88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766317384440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8jvsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8
3784a0-6942-4906-ba66-76d7fa25dc04,},Annotations:map[string]string{io.kubernetes.container.hash: 5c9a26e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6,PodSandboxId:89ed92966bdfe66e648259c571784d3f37474b077aba684a806c60d6f3951885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713576765484422380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f57d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54252f52-9bb1-48a2-98e1-980f40fa727d,},Annotations:map[string]string{io.kubernetes.container.hash: 60963711,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab,PodSandboxId:781d22b357d6f83fc472b8acea335f9169bc1366ac060a3e41e9644f1a2e9689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576746081900568,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d4d800db9704a575894ed300277d2,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb,PodSandboxId:54a949b714e584cc49aae201c37a1b6d3f813aca2883b253b98d9d61e308020d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576746089379598,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c9d8b697029f4835cac7bf45661ef0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26,PodSandboxId:ee4c8021ef4d8a2e0db2561c1241e85501868ab531431f700c892d7c136bc69f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576746118139827,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d058398ee22df8b2543ed012544bc525,},Annotations:map[string]string{io.kubernetes.container.hash: fbb975a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d,PodSandboxId:d91316e86d41c4e8fde7213da8fb6c9a78cd9b5680554264ed599da314383eb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576746054604644,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed7f8a123467f5638e826b4e70ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 122cf7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0289680b-8654-4221-9a81-0a7ae787f315 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.979351950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e0adb75-1163-49d9-965b-f9e2c91da7a7 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.979424396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e0adb75-1163-49d9-965b-f9e2c91da7a7 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.980611424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06bb66c5-53d1-478f-9538-28e9257b3265 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.981162538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577309981133404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06bb66c5-53d1-478f-9538-28e9257b3265 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.981726764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6621fc1a-0c3a-4191-979a-10475fb224c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.981777458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6621fc1a-0c3a-4191-979a-10475fb224c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:49 no-preload-338118 crio[724]: time="2024-04-20 01:41:49.981957796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031,PodSandboxId:b33d1aec626eb1433ac85d191075dd66073501f5a366a78ec8bd16694e81cfa8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576767181067027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c12418-805f-4923-b7ab-4fa0fe07ec9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6f824527,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3,PodSandboxId:27853fa3c62eb7d341e02dd40a599b437d79561b0058a63303d3665b540c2b94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766464108947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lhnxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0fb3119-abcb-4646-9aae-a54438a76adf,},Annotations:map[string]string{io.kubernetes.container.hash: 744d27ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d,PodSandboxId:09e00fbbb48fd2831199a1546285d81720184d589490604df33575ce42b0ea88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766317384440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8jvsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8
3784a0-6942-4906-ba66-76d7fa25dc04,},Annotations:map[string]string{io.kubernetes.container.hash: 5c9a26e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6,PodSandboxId:89ed92966bdfe66e648259c571784d3f37474b077aba684a806c60d6f3951885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713576765484422380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f57d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54252f52-9bb1-48a2-98e1-980f40fa727d,},Annotations:map[string]string{io.kubernetes.container.hash: 60963711,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab,PodSandboxId:781d22b357d6f83fc472b8acea335f9169bc1366ac060a3e41e9644f1a2e9689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576746081900568,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d4d800db9704a575894ed300277d2,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb,PodSandboxId:54a949b714e584cc49aae201c37a1b6d3f813aca2883b253b98d9d61e308020d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576746089379598,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c9d8b697029f4835cac7bf45661ef0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26,PodSandboxId:ee4c8021ef4d8a2e0db2561c1241e85501868ab531431f700c892d7c136bc69f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576746118139827,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d058398ee22df8b2543ed012544bc525,},Annotations:map[string]string{io.kubernetes.container.hash: fbb975a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d,PodSandboxId:d91316e86d41c4e8fde7213da8fb6c9a78cd9b5680554264ed599da314383eb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576746054604644,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed7f8a123467f5638e826b4e70ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 122cf7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6621fc1a-0c3a-4191-979a-10475fb224c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.020759729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70f1a385-7f34-4a2f-97cd-aa7ee5f4b8cd name=/runtime.v1.RuntimeService/Version
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.020860904Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70f1a385-7f34-4a2f-97cd-aa7ee5f4b8cd name=/runtime.v1.RuntimeService/Version
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.022259795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92e71764-e419-43d5-bbb4-619063657073 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.022754149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577310022726089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92e71764-e419-43d5-bbb4-619063657073 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.023656647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=891b361c-8fb7-4c96-9231-b975196ef142 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.023705917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=891b361c-8fb7-4c96-9231-b975196ef142 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.024654279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031,PodSandboxId:b33d1aec626eb1433ac85d191075dd66073501f5a366a78ec8bd16694e81cfa8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576767181067027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c12418-805f-4923-b7ab-4fa0fe07ec9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6f824527,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3,PodSandboxId:27853fa3c62eb7d341e02dd40a599b437d79561b0058a63303d3665b540c2b94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766464108947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lhnxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0fb3119-abcb-4646-9aae-a54438a76adf,},Annotations:map[string]string{io.kubernetes.container.hash: 744d27ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d,PodSandboxId:09e00fbbb48fd2831199a1546285d81720184d589490604df33575ce42b0ea88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766317384440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8jvsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8
3784a0-6942-4906-ba66-76d7fa25dc04,},Annotations:map[string]string{io.kubernetes.container.hash: 5c9a26e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6,PodSandboxId:89ed92966bdfe66e648259c571784d3f37474b077aba684a806c60d6f3951885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713576765484422380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f57d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54252f52-9bb1-48a2-98e1-980f40fa727d,},Annotations:map[string]string{io.kubernetes.container.hash: 60963711,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab,PodSandboxId:781d22b357d6f83fc472b8acea335f9169bc1366ac060a3e41e9644f1a2e9689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576746081900568,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d4d800db9704a575894ed300277d2,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb,PodSandboxId:54a949b714e584cc49aae201c37a1b6d3f813aca2883b253b98d9d61e308020d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576746089379598,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c9d8b697029f4835cac7bf45661ef0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26,PodSandboxId:ee4c8021ef4d8a2e0db2561c1241e85501868ab531431f700c892d7c136bc69f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576746118139827,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d058398ee22df8b2543ed012544bc525,},Annotations:map[string]string{io.kubernetes.container.hash: fbb975a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d,PodSandboxId:d91316e86d41c4e8fde7213da8fb6c9a78cd9b5680554264ed599da314383eb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576746054604644,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed7f8a123467f5638e826b4e70ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 122cf7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=891b361c-8fb7-4c96-9231-b975196ef142 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.047840706Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=90d3da9f-57f6-48a4-8c0f-7749fdc759ea name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.048067516Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b33d1aec626eb1433ac85d191075dd66073501f5a366a78ec8bd16694e81cfa8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:51c12418-805f-4923-b7ab-4fa0fe07ec9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713576767007380483,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c12418-805f-4923-b7ab-4fa0fe07ec9c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-20T01:32:46.697509668Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43b0a2c2118a41e4ac4e53634c6bcb586505879f51f7e762ce530563ba4fcd59,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-xbwdm,Uid:798c7b61-a93d-4daf-a832-e15056a2ae24,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713576766712969638,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-xbwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798c7b61-a93d-4daf-a832-e15056a2ae24
,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:32:46.404916856Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:09e00fbbb48fd2831199a1546285d81720184d589490604df33575ce42b0ea88,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-8jvsz,Uid:d83784a0-6942-4906-ba66-76d7fa25dc04,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713576765455510328,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-8jvsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83784a0-6942-4906-ba66-76d7fa25dc04,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:32:45.124752115Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:27853fa3c62eb7d341e02dd40a599b437d79561b0058a63303d3665b540c2b94,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-lhnxg,Uid:c0fb3119-abcb-4646-
9aae-a54438a76adf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713576765420754935,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-lhnxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0fb3119-abcb-4646-9aae-a54438a76adf,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:32:45.104787667Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89ed92966bdfe66e648259c571784d3f37474b077aba684a806c60d6f3951885,Metadata:&PodSandboxMetadata{Name:kube-proxy-f57d9,Uid:54252f52-9bb1-48a2-98e1-980f40fa727d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713576765154258856,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-f57d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54252f52-9bb1-48a2-98e1-980f40fa727d,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-20T01:32:44.840635939Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d91316e86d41c4e8fde7213da8fb6c9a78cd9b5680554264ed599da314383eb0,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-338118,Uid:a28ed7f8a123467f5638e826b4e70ce2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713576745828395337,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed7f8a123467f5638e826b4e70ce2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.89:2379,kubernetes.io/config.hash: a28ed7f8a123467f5638e826b4e70ce2,kubernetes.io/config.seen: 2024-04-20T01:32:25.354925871Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee4c8021ef4d8a2e0db2561c1241e85501868ab531431f700c892d7c136bc69f,Meta
data:&PodSandboxMetadata{Name:kube-apiserver-no-preload-338118,Uid:d058398ee22df8b2543ed012544bc525,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713576745827018583,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d058398ee22df8b2543ed012544bc525,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.89:8443,kubernetes.io/config.hash: d058398ee22df8b2543ed012544bc525,kubernetes.io/config.seen: 2024-04-20T01:32:25.354930463Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:781d22b357d6f83fc472b8acea335f9169bc1366ac060a3e41e9644f1a2e9689,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-338118,Uid:c57d4d800db9704a575894ed300277d2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713576745826476963,Labels:map[string]string{
component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d4d800db9704a575894ed300277d2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c57d4d800db9704a575894ed300277d2,kubernetes.io/config.seen: 2024-04-20T01:32:25.354932569Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:54a949b714e584cc49aae201c37a1b6d3f813aca2883b253b98d9d61e308020d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-338118,Uid:74c9d8b697029f4835cac7bf45661ef0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713576745819013376,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c9d8b697029f4835cac7bf45661ef0,tier: control-plane,},Annotations:map[string]string{
kubernetes.io/config.hash: 74c9d8b697029f4835cac7bf45661ef0,kubernetes.io/config.seen: 2024-04-20T01:32:25.354931660Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=90d3da9f-57f6-48a4-8c0f-7749fdc759ea name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.048793895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9bd3313-9bf9-4cf4-92cd-f6b83fc2bcb4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.048844777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9bd3313-9bf9-4cf4-92cd-f6b83fc2bcb4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:41:50 no-preload-338118 crio[724]: time="2024-04-20 01:41:50.049037971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031,PodSandboxId:b33d1aec626eb1433ac85d191075dd66073501f5a366a78ec8bd16694e81cfa8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576767181067027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c12418-805f-4923-b7ab-4fa0fe07ec9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6f824527,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3,PodSandboxId:27853fa3c62eb7d341e02dd40a599b437d79561b0058a63303d3665b540c2b94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766464108947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lhnxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0fb3119-abcb-4646-9aae-a54438a76adf,},Annotations:map[string]string{io.kubernetes.container.hash: 744d27ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d,PodSandboxId:09e00fbbb48fd2831199a1546285d81720184d589490604df33575ce42b0ea88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766317384440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8jvsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8
3784a0-6942-4906-ba66-76d7fa25dc04,},Annotations:map[string]string{io.kubernetes.container.hash: 5c9a26e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6,PodSandboxId:89ed92966bdfe66e648259c571784d3f37474b077aba684a806c60d6f3951885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713576765484422380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f57d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54252f52-9bb1-48a2-98e1-980f40fa727d,},Annotations:map[string]string{io.kubernetes.container.hash: 60963711,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab,PodSandboxId:781d22b357d6f83fc472b8acea335f9169bc1366ac060a3e41e9644f1a2e9689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576746081900568,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d4d800db9704a575894ed300277d2,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb,PodSandboxId:54a949b714e584cc49aae201c37a1b6d3f813aca2883b253b98d9d61e308020d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576746089379598,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c9d8b697029f4835cac7bf45661ef0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26,PodSandboxId:ee4c8021ef4d8a2e0db2561c1241e85501868ab531431f700c892d7c136bc69f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576746118139827,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d058398ee22df8b2543ed012544bc525,},Annotations:map[string]string{io.kubernetes.container.hash: fbb975a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d,PodSandboxId:d91316e86d41c4e8fde7213da8fb6c9a78cd9b5680554264ed599da314383eb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576746054604644,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed7f8a123467f5638e826b4e70ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 122cf7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9bd3313-9bf9-4cf4-92cd-f6b83fc2bcb4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3767b5a85864a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b33d1aec626eb       storage-provisioner
	14222cfb74612       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   27853fa3c62eb       coredns-7db6d8ff4d-lhnxg
	b0820d3d9e22e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   09e00fbbb48fd       coredns-7db6d8ff4d-8jvsz
	444f48e537518       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   89ed92966bdfe       kube-proxy-f57d9
	258f4b3a17cd3       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            3                   ee4c8021ef4d8       kube-apiserver-no-preload-338118
	41521be8a42d1       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   3                   54a949b714e58       kube-controller-manager-no-preload-338118
	0c43ca20df102       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   781d22b357d6f       kube-scheduler-no-preload-338118
	a3cefb8dc1660       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   d91316e86d41c       etcd-no-preload-338118
	
	
	==> coredns [14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-338118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-338118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=no-preload-338118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T01_32_32_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:32:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-338118
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:41:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:37:57 +0000   Sat, 20 Apr 2024 01:32:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:37:57 +0000   Sat, 20 Apr 2024 01:32:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:37:57 +0000   Sat, 20 Apr 2024 01:32:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:37:57 +0000   Sat, 20 Apr 2024 01:32:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.89
	  Hostname:    no-preload-338118
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b73ffd0cf75b41f8a91992d4edaf23be
	  System UUID:                b73ffd0c-f75b-41f8-a919-92d4edaf23be
	  Boot ID:                    168082aa-1171-464e-a3a5-292a54461c4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8jvsz                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-7db6d8ff4d-lhnxg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-no-preload-338118                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-338118             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-338118    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-f57d9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-no-preload-338118             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-xbwdm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node no-preload-338118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node no-preload-338118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node no-preload-338118 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node no-preload-338118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node no-preload-338118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node no-preload-338118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m6s                   node-controller  Node no-preload-338118 event: Registered Node no-preload-338118 in Controller
	
	
	==> dmesg <==
	[  +0.053930] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.141718] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.599165] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.759981] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.853303] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.055508] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060058] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.197901] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.154711] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.339769] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +17.537229] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.061044] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.835869] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[Apr20 01:27] kauditd_printk_skb: 84 callbacks suppressed
	[ +31.927995] kauditd_printk_skb: 55 callbacks suppressed
	[Apr20 01:28] kauditd_printk_skb: 24 callbacks suppressed
	[Apr20 01:32] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.346395] systemd-fstab-generator[4077]: Ignoring "noauto" option for root device
	[  +4.637309] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.946258] systemd-fstab-generator[4405]: Ignoring "noauto" option for root device
	[ +13.407161] systemd-fstab-generator[4599]: Ignoring "noauto" option for root device
	[  +0.113132] kauditd_printk_skb: 14 callbacks suppressed
	[Apr20 01:33] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d] <==
	{"level":"info","ts":"2024-04-20T01:32:26.531268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 switched to configuration voters=(3069027699410840064)"}
	{"level":"info","ts":"2024-04-20T01:32:26.533495Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3131bac5af784039","local-member-id":"2a97606ea537aa00","added-peer-id":"2a97606ea537aa00","added-peer-peer-urls":["https://192.168.72.89:2380"]}
	{"level":"info","ts":"2024-04-20T01:32:26.56491Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T01:32:26.565609Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2a97606ea537aa00","initial-advertise-peer-urls":["https://192.168.72.89:2380"],"listen-peer-urls":["https://192.168.72.89:2380"],"advertise-client-urls":["https://192.168.72.89:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.89:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:32:26.565796Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T01:32:26.565917Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.89:2380"}
	{"level":"info","ts":"2024-04-20T01:32:26.565945Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.89:2380"}
	{"level":"info","ts":"2024-04-20T01:32:26.585426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-20T01:32:26.585566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-20T01:32:26.585647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 received MsgPreVoteResp from 2a97606ea537aa00 at term 1"}
	{"level":"info","ts":"2024-04-20T01:32:26.585686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 became candidate at term 2"}
	{"level":"info","ts":"2024-04-20T01:32:26.585711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 received MsgVoteResp from 2a97606ea537aa00 at term 2"}
	{"level":"info","ts":"2024-04-20T01:32:26.585737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 became leader at term 2"}
	{"level":"info","ts":"2024-04-20T01:32:26.585762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2a97606ea537aa00 elected leader 2a97606ea537aa00 at term 2"}
	{"level":"info","ts":"2024-04-20T01:32:26.590549Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2a97606ea537aa00","local-member-attributes":"{Name:no-preload-338118 ClientURLs:[https://192.168.72.89:2379]}","request-path":"/0/members/2a97606ea537aa00/attributes","cluster-id":"3131bac5af784039","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:32:26.590792Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:32:26.59116Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:32:26.597334Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:32:26.597384Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:32:26.591422Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:32:26.599528Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3131bac5af784039","local-member-id":"2a97606ea537aa00","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:32:26.599669Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:32:26.602458Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:32:26.601164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.89:2379"}
	{"level":"info","ts":"2024-04-20T01:32:26.603178Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:41:50 up 15 min,  0 users,  load average: 0.19, 0.19, 0.18
	Linux no-preload-338118 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26] <==
	I0420 01:35:47.272259       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:37:28.746462       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:37:28.746579       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0420 01:37:29.747701       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:37:29.747880       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:37:29.747919       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:37:29.747796       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:37:29.748040       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:37:29.749036       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:38:29.748943       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:38:29.749033       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:38:29.749043       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:38:29.750366       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:38:29.750417       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:38:29.750452       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:40:29.749464       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:40:29.749567       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:40:29.749579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:40:29.750697       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:40:29.750849       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:40:29.750886       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb] <==
	I0420 01:36:15.003058       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:36:44.502070       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:36:45.013059       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:37:14.508001       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:37:15.022852       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:37:44.514101       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:37:45.033598       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:38:14.520830       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:38:15.041993       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:38:44.527616       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:38:45.050421       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0420 01:38:46.678158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="387.721µs"
	I0420 01:38:58.674956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="102.896µs"
	E0420 01:39:14.533856       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:39:15.065541       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:39:44.539218       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:39:45.075546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:40:14.546449       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:40:15.085221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:40:44.553111       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:40:45.094837       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:41:14.559180       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:41:15.109523       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:41:44.567350       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:41:45.119770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6] <==
	I0420 01:32:46.019258       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:32:46.042217       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.89"]
	I0420 01:32:46.353507       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:32:46.359516       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:32:46.359811       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:32:46.521268       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:32:46.521582       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:32:46.521601       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:32:46.540869       1 config.go:192] "Starting service config controller"
	I0420 01:32:46.540907       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:32:46.540943       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:32:46.540947       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:32:46.541946       1 config.go:319] "Starting node config controller"
	I0420 01:32:46.542076       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:32:46.641631       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:32:46.641708       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:32:46.648329       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab] <==
	W0420 01:32:28.760699       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:32:28.761677       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:32:29.682688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 01:32:29.682817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 01:32:29.709578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 01:32:29.709675       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 01:32:29.711604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 01:32:29.711667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 01:32:29.718270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 01:32:29.718423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 01:32:29.728740       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:32:29.728800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 01:32:29.737965       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 01:32:29.737989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 01:32:29.775591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:32:29.775648       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:32:29.803652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 01:32:29.803736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 01:32:29.856164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 01:32:29.856228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 01:32:29.909376       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 01:32:29.909432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 01:32:29.926264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 01:32:29.926422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0420 01:32:32.533729       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 01:39:31 no-preload-338118 kubelet[4412]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:39:31 no-preload-338118 kubelet[4412]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:39:31 no-preload-338118 kubelet[4412]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:39:31 no-preload-338118 kubelet[4412]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:39:37 no-preload-338118 kubelet[4412]: E0420 01:39:37.656167    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:39:51 no-preload-338118 kubelet[4412]: E0420 01:39:51.657944    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:40:05 no-preload-338118 kubelet[4412]: E0420 01:40:05.659098    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:40:18 no-preload-338118 kubelet[4412]: E0420 01:40:18.656974    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:40:31 no-preload-338118 kubelet[4412]: E0420 01:40:31.681506    4412 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:40:31 no-preload-338118 kubelet[4412]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:40:31 no-preload-338118 kubelet[4412]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:40:31 no-preload-338118 kubelet[4412]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:40:31 no-preload-338118 kubelet[4412]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:40:33 no-preload-338118 kubelet[4412]: E0420 01:40:33.658372    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:40:47 no-preload-338118 kubelet[4412]: E0420 01:40:47.657119    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:41:01 no-preload-338118 kubelet[4412]: E0420 01:41:01.658535    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:41:12 no-preload-338118 kubelet[4412]: E0420 01:41:12.657050    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:41:24 no-preload-338118 kubelet[4412]: E0420 01:41:24.656220    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:41:31 no-preload-338118 kubelet[4412]: E0420 01:41:31.680759    4412 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:41:31 no-preload-338118 kubelet[4412]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:41:31 no-preload-338118 kubelet[4412]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:41:31 no-preload-338118 kubelet[4412]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:41:31 no-preload-338118 kubelet[4412]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:41:36 no-preload-338118 kubelet[4412]: E0420 01:41:36.656427    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:41:48 no-preload-338118 kubelet[4412]: E0420 01:41:48.657421    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	
	
	==> storage-provisioner [3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031] <==
	I0420 01:32:47.280814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 01:32:47.292136       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 01:32:47.292242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 01:32:47.303702       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 01:32:47.303830       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-338118_eac38729-d5ab-4109-971a-c3e155be402a!
	I0420 01:32:47.304630       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ae8f39ae-31e9-464c-9832-008367d3cf14", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-338118_eac38729-d5ab-4109-971a-c3e155be402a became leader
	I0420 01:32:47.404706       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-338118_eac38729-d5ab-4109-971a-c3e155be402a!
	

                                                
                                                
-- /stdout --
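The captured logs above point at one consistent picture: the kube-apiserver cannot aggregate the v1beta1.metrics.k8s.io APIService (HTTP 503 from the backing service), the kube-controller-manager keeps reporting stale GroupVersion discovery for metrics.k8s.io/v1beta1, and the kubelet is in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4, which suggests the test deliberately points metrics-server at an unreachable registry. A hedged, manual way to confirm that state (assuming the no-preload-338118 context is still reachable and that the addon pod carries the usual k8s-app=metrics-server label; both are assumptions, not taken from this report):

	# Is the aggregated metrics API registered, and does it report Available?
	kubectl --context no-preload-338118 get apiservice v1beta1.metrics.k8s.io
	# Why is the backing pod not running? Expect ImagePullBackOff on the fake.domain image.
	kubectl --context no-preload-338118 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context no-preload-338118 -n kube-system describe pods -l k8s-app=metrics-server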
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-338118 -n no-preload-338118
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-338118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-xbwdm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-338118 describe pod metrics-server-569cc877fc-xbwdm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-338118 describe pod metrics-server-569cc877fc-xbwdm: exit status 1 (74.04049ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-xbwdm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-338118 describe pod metrics-server-569cc877fc-xbwdm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.22s)
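For reference, the readiness wait this test performs can be approximated by hand. This is only a sketch: it assumes the same k8s-app=kubernetes-dashboard selector and kubernetes-dashboard namespace that the old-k8s-version variant below waits on, and that the no-preload-338118 profile is still up.

	# List the pods the test polls for, then wait on their Ready condition.
	kubectl --context no-preload-338118 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-338118 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s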

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:34:36.575691   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:35:12.950780   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:35:17.520914   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:35:27.815536   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:35:47.873700   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:35:59.620450   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
    (previous warning repeated 11 more times)
E0420 01:36:12.219053   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
    (previous warning repeated 17 more times)
E0420 01:36:30.107053   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
    (previous warning repeated 24 more times)
E0420 01:36:54.410290   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
    (previous warning repeated 15 more times)
E0420 01:37:10.921437   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
    (previous warning repeated 24 more times)
E0420 01:37:35.262696   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
    (previous warning repeated 35 more times)
E0420 01:38:11.658191   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
    (previous warning repeated 37 more times)
E0420 01:38:49.904993   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:38:54.475154   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:39:36.575629   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:40:27.815250   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:40:47.873480   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:41:12.218671   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:41:14.708398   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:41:30.106822   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:41:54.410149   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:43:11.657914   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 2 (254.177341ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-564860" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 2 (242.096707ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-564860 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-564860 logs -n 25: (1.531800567s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-172352 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | disable-driver-mounts-172352                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:17 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-338118             | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC | 20 Apr 24 01:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-907988  | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-269507            | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-564860        | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-338118                  | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-907988       | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:30 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-269507                 | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-564860             | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:21:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:21:33.400343  142411 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:21:33.400444  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400452  142411 out.go:304] Setting ErrFile to fd 2...
	I0420 01:21:33.400464  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400681  142411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:21:33.401213  142411 out.go:298] Setting JSON to false
	I0420 01:21:33.402151  142411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14640,"bootTime":1713561453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:21:33.402214  142411 start.go:139] virtualization: kvm guest
	I0420 01:21:33.404200  142411 out.go:177] * [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:21:33.405933  142411 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:21:33.407240  142411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:21:33.405946  142411 notify.go:220] Checking for updates...
	I0420 01:21:33.408693  142411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:21:33.409906  142411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:21:33.411155  142411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:21:33.412528  142411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:21:33.414062  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:21:33.414460  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.414524  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.428987  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0420 01:21:33.429348  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.429850  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.429873  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.430178  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.430370  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.431825  142411 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0420 01:21:33.432895  142411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:21:33.433209  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.433251  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.447157  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0420 01:21:33.447543  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.448080  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.448123  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.448444  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.448609  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.481664  142411 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:21:33.482784  142411 start.go:297] selected driver: kvm2
	I0420 01:21:33.482796  142411 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-5
64860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.482903  142411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:21:33.483572  142411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.483646  142411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:21:33.497421  142411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:21:33.497790  142411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:21:33.497854  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:21:33.497869  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:21:33.497915  142411 start.go:340] cluster config:
	{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.498027  142411 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.499624  142411 out.go:177] * Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	I0420 01:21:33.500874  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:21:33.500901  142411 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:21:33.500914  142411 cache.go:56] Caching tarball of preloaded images
	I0420 01:21:33.500992  142411 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:21:33.501007  142411 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 01:21:33.501116  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:21:33.501613  142411 start.go:360] acquireMachinesLock for old-k8s-version-564860: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:21:35.817529  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:38.889617  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:44.969590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:48.041555  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:54.121550  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:57.193604  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:03.273575  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:06.345487  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:12.425567  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:15.497538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:21.577563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:24.649534  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:30.729573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:33.801566  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:39.881590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:42.953591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:49.033641  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:52.105579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:58.185591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:01.257655  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:07.337585  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:10.409568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:16.489562  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:19.561602  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:25.641579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:28.713581  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:34.793618  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:37.865643  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:43.945593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:47.017561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:53.097597  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:56.169538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:02.249561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:05.321557  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:11.401563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:14.473539  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:20.553591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:23.625573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:29.705563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:32.777590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:38.857568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:41.929619  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:48.009565  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:51.081536  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:57.161593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:00.233633  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:03.237801  141927 start.go:364] duration metric: took 4m24.096402827s to acquireMachinesLock for "default-k8s-diff-port-907988"
	I0420 01:25:03.237873  141927 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:03.237883  141927 fix.go:54] fixHost starting: 
	I0420 01:25:03.238412  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:03.238453  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:03.254029  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I0420 01:25:03.254570  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:03.255071  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:25:03.255097  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:03.255474  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:03.255703  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:03.255871  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:25:03.257395  141927 fix.go:112] recreateIfNeeded on default-k8s-diff-port-907988: state=Stopped err=<nil>
	I0420 01:25:03.257430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	W0420 01:25:03.257577  141927 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:03.259083  141927 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-907988" ...
	I0420 01:25:03.260199  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Start
	I0420 01:25:03.260402  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring networks are active...
	I0420 01:25:03.261176  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network default is active
	I0420 01:25:03.261553  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network mk-default-k8s-diff-port-907988 is active
	I0420 01:25:03.262016  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Getting domain xml...
	I0420 01:25:03.262834  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Creating domain...
	I0420 01:25:03.235208  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:03.235275  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235620  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:25:03.235653  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235902  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:25:03.237636  141746 machine.go:97] duration metric: took 4m37.412949021s to provisionDockerMachine
	I0420 01:25:03.237677  141746 fix.go:56] duration metric: took 4m37.433896084s for fixHost
	I0420 01:25:03.237685  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 4m37.433927307s
	W0420 01:25:03.237715  141746 start.go:713] error starting host: provision: host is not running
	W0420 01:25:03.237980  141746 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0420 01:25:03.238076  141746 start.go:728] Will try again in 5 seconds ...
	I0420 01:25:04.453535  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting to get IP...
	I0420 01:25:04.454427  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454803  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454886  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.454785  143129 retry.go:31] will retry after 205.593849ms: waiting for machine to come up
	I0420 01:25:04.662560  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663106  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663133  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.663007  143129 retry.go:31] will retry after 246.821866ms: waiting for machine to come up
	I0420 01:25:04.911578  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912067  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912100  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.912014  143129 retry.go:31] will retry after 478.36287ms: waiting for machine to come up
	I0420 01:25:05.391624  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392063  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.391965  143129 retry.go:31] will retry after 495.387005ms: waiting for machine to come up
	I0420 01:25:05.888569  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889093  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889116  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.889009  143129 retry.go:31] will retry after 721.867239ms: waiting for machine to come up
	I0420 01:25:06.613018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613550  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613583  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:06.613495  143129 retry.go:31] will retry after 724.502229ms: waiting for machine to come up
	I0420 01:25:07.339473  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339974  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:07.339883  143129 retry.go:31] will retry after 916.936196ms: waiting for machine to come up
	I0420 01:25:08.258657  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259033  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259064  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:08.258981  143129 retry.go:31] will retry after 1.088675043s: waiting for machine to come up
	I0420 01:25:08.239597  141746 start.go:360] acquireMachinesLock for no-preload-338118: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:25:09.349021  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349421  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:09.349362  143129 retry.go:31] will retry after 1.139610002s: waiting for machine to come up
	I0420 01:25:10.490715  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491162  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491190  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:10.491119  143129 retry.go:31] will retry after 1.625829976s: waiting for machine to come up
	I0420 01:25:12.118751  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119231  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119254  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:12.119184  143129 retry.go:31] will retry after 2.904309002s: waiting for machine to come up
	I0420 01:25:15.025713  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026310  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:15.026227  143129 retry.go:31] will retry after 3.471792967s: waiting for machine to come up
	I0420 01:25:18.500247  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500679  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:18.500595  143129 retry.go:31] will retry after 4.499766051s: waiting for machine to come up
	I0420 01:25:23.005446  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.005935  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Found IP for machine: 192.168.39.222
	I0420 01:25:23.005956  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserving static IP address...
	I0420 01:25:23.005970  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has current primary IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.006453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.006479  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserved static IP address: 192.168.39.222
	I0420 01:25:23.006513  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | skip adding static IP to network mk-default-k8s-diff-port-907988 - found existing host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"}
	I0420 01:25:23.006537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for SSH to be available...
	I0420 01:25:23.006544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Getting to WaitForSSH function...
	I0420 01:25:23.009090  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009505  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.009537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009658  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH client type: external
	I0420 01:25:23.009695  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa (-rw-------)
	I0420 01:25:23.009732  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:23.009748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | About to run SSH command:
	I0420 01:25:23.009766  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | exit 0
	I0420 01:25:23.133489  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | SSH cmd err, output: <nil>: 
	I0420 01:25:23.133940  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetConfigRaw
	I0420 01:25:23.134589  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.137340  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.137685  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.137708  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.138000  141927 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/config.json ...
	I0420 01:25:23.138228  141927 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:23.138253  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:23.138461  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.140536  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.140815  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.140841  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.141024  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.141244  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141450  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141595  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.141777  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.142053  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.142067  141927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:23.249946  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:23.249979  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250250  141927 buildroot.go:166] provisioning hostname "default-k8s-diff-port-907988"
	I0420 01:25:23.250280  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250483  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.253030  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.253456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253564  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.253755  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.253978  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.254135  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.254334  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.254504  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.254517  141927 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-907988 && echo "default-k8s-diff-port-907988" | sudo tee /etc/hostname
	I0420 01:25:23.379061  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-907988
	
	I0420 01:25:23.379092  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.381893  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382249  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.382278  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382465  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.382666  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382831  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382939  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.383118  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.383324  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.383349  141927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-907988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-907988/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-907988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:23.499869  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:23.499903  141927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:23.499932  141927 buildroot.go:174] setting up certificates
	I0420 01:25:23.499941  141927 provision.go:84] configureAuth start
	I0420 01:25:23.499950  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.500178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.502735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503050  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.503085  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503201  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.505586  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.505924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.505968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.506036  141927 provision.go:143] copyHostCerts
	I0420 01:25:23.506136  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:23.506150  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:23.506233  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:23.506386  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:23.506396  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:23.506444  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:23.506525  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:23.506536  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:23.506569  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:23.506640  141927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-907988 san=[127.0.0.1 192.168.39.222 default-k8s-diff-port-907988 localhost minikube]
	I0420 01:25:23.598855  141927 provision.go:177] copyRemoteCerts
	I0420 01:25:23.598930  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:23.598967  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.602183  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.602544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.602903  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.603143  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.603301  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:23.688294  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:23.714719  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0420 01:25:23.744530  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:25:23.774733  141927 provision.go:87] duration metric: took 274.778779ms to configureAuth
	I0420 01:25:23.774756  141927 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:23.774990  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:23.775083  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.777817  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.778213  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778376  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.778596  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778763  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778984  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.779167  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.779364  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.779393  141927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:24.314463  142057 start.go:364] duration metric: took 4m32.915907541s to acquireMachinesLock for "embed-certs-269507"
	I0420 01:25:24.314618  142057 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:24.314645  142057 fix.go:54] fixHost starting: 
	I0420 01:25:24.315169  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:24.315220  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:24.331820  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0420 01:25:24.332243  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:24.332707  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:25:24.332730  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:24.333157  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:24.333371  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:24.333551  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:25:24.335004  142057 fix.go:112] recreateIfNeeded on embed-certs-269507: state=Stopped err=<nil>
	I0420 01:25:24.335044  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	W0420 01:25:24.335211  142057 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:24.337246  142057 out.go:177] * Restarting existing kvm2 VM for "embed-certs-269507" ...
	I0420 01:25:24.056795  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:24.056832  141927 machine.go:97] duration metric: took 918.585863ms to provisionDockerMachine
	I0420 01:25:24.056849  141927 start.go:293] postStartSetup for "default-k8s-diff-port-907988" (driver="kvm2")
	I0420 01:25:24.056865  141927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:24.056889  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.057250  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:24.057281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.060602  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.060992  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.061028  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.061196  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.061422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.061631  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.061785  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.152109  141927 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:24.157292  141927 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:24.157330  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:24.157397  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:24.157490  141927 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:24.157606  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:24.171039  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:24.201343  141927 start.go:296] duration metric: took 144.476748ms for postStartSetup
	I0420 01:25:24.201383  141927 fix.go:56] duration metric: took 20.963499628s for fixHost
	I0420 01:25:24.201409  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.204283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204648  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.204681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204842  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.205022  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205204  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205411  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.205732  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:24.206255  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:24.206269  141927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:24.314311  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576324.296261493
	
	I0420 01:25:24.314336  141927 fix.go:216] guest clock: 1713576324.296261493
	I0420 01:25:24.314346  141927 fix.go:229] Guest: 2024-04-20 01:25:24.296261493 +0000 UTC Remote: 2024-04-20 01:25:24.201388226 +0000 UTC m=+285.207728057 (delta=94.873267ms)
	I0420 01:25:24.314373  141927 fix.go:200] guest clock delta is within tolerance: 94.873267ms
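[Editor's note, illustrative only] The clock-skew check above compares the guest clock against the local timestamp and accepts the host when the delta is under a tolerance. A small sketch of that comparison follows, using the two timestamps from the log entry; the one-second tolerance is an assumption for illustration, not necessarily the value minikube uses.

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // reference clock that no resync is needed.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tol
    }

    func main() {
        // Values taken from the log entry above.
        guest := time.Unix(1713576324, 296261493)                      // guest clock
        host := time.Date(2024, 4, 20, 1, 25, 24, 201388226, time.UTC) // remote (local) reading

        const tol = time.Second // illustrative tolerance
        fmt.Println("delta:", guest.Sub(host), "ok:", withinTolerance(guest, host, tol))
    }
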
	I0420 01:25:24.314380  141927 start.go:83] releasing machines lock for "default-k8s-diff-port-907988", held for 21.076529311s
	I0420 01:25:24.314420  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.314699  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:24.317281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.317731  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317858  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318364  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318557  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318664  141927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:24.318723  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.318833  141927 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:24.318862  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.321519  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321572  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321937  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.321968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321994  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.322011  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.322121  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322233  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322323  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322502  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322730  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.322871  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.403742  141927 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:24.429207  141927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:24.590621  141927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:24.597818  141927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:24.597890  141927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:24.617031  141927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:24.617050  141927 start.go:494] detecting cgroup driver to use...
	I0420 01:25:24.617126  141927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:24.643134  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:24.658222  141927 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:24.658275  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:24.672409  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:24.686722  141927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:24.810871  141927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:24.965702  141927 docker.go:233] disabling docker service ...
	I0420 01:25:24.965765  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:24.984504  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:24.999580  141927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:25.151023  141927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:25.278443  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:25.295439  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:25.316425  141927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:25.316494  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.329052  141927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:25.329119  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.342102  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.354831  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.368084  141927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:25.380515  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.392952  141927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.411707  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.423776  141927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:25.434175  141927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:25.434234  141927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:25.449180  141927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:25.460018  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:25.579669  141927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:25.741777  141927 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:25.741854  141927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:25.747422  141927 start.go:562] Will wait 60s for crictl version
	I0420 01:25:25.747478  141927 ssh_runner.go:195] Run: which crictl
	I0420 01:25:25.752164  141927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:25.800400  141927 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:25.800491  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.832099  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.865692  141927 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:25:24.338547  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Start
	I0420 01:25:24.338743  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring networks are active...
	I0420 01:25:24.339527  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network default is active
	I0420 01:25:24.340064  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network mk-embed-certs-269507 is active
	I0420 01:25:24.340520  142057 main.go:141] libmachine: (embed-certs-269507) Getting domain xml...
	I0420 01:25:24.341363  142057 main.go:141] libmachine: (embed-certs-269507) Creating domain...
	I0420 01:25:25.566725  142057 main.go:141] libmachine: (embed-certs-269507) Waiting to get IP...
	I0420 01:25:25.567704  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.568195  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.568263  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.568160  143271 retry.go:31] will retry after 229.672507ms: waiting for machine to come up
	I0420 01:25:25.799515  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.799964  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.799994  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.799916  143271 retry.go:31] will retry after 352.048372ms: waiting for machine to come up
	I0420 01:25:26.153710  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.154217  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.154245  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.154159  143271 retry.go:31] will retry after 451.404487ms: waiting for machine to come up
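[Editor's note, illustrative only] The repeated "will retry after …: waiting for machine to come up" lines are a retry loop with growing delays until the restarted domain gets a DHCP lease. The pattern can be sketched roughly as below; lookupIP, the delays, and the returned address are stand-ins, not libmachine's actual retry.go logic.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // errNoIP stands in for "unable to find current IP address of domain ...".
    var errNoIP = errors.New("no IP address yet")

    // lookupIP is a placeholder for querying the hypervisor's DHCP leases.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 { // pretend the lease appears on the 5th try
            return "", errNoIP
        }
        return "192.0.2.10", nil // placeholder address
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            // Grow the delay and add some jitter, similar in spirit to the
            // increasing waits seen in the log (229ms, 352ms, 451ms, ...).
            wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
    }
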
	I0420 01:25:25.867283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:25.870225  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.870725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:25.870748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.871001  141927 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:25.875986  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:25.890923  141927 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:25.891043  141927 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:25.891088  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:25.934665  141927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:25.934743  141927 ssh_runner.go:195] Run: which lz4
	I0420 01:25:25.939157  141927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:25:25.943759  141927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:25.943788  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:27.674416  141927 crio.go:462] duration metric: took 1.735279369s to copy over tarball
	I0420 01:25:27.674484  141927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:25:26.607751  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.608327  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.608362  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.608273  143271 retry.go:31] will retry after 548.149542ms: waiting for machine to come up
	I0420 01:25:27.157746  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.158193  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.158220  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.158158  143271 retry.go:31] will retry after 543.066807ms: waiting for machine to come up
	I0420 01:25:27.702417  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.702812  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.702842  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.702778  143271 retry.go:31] will retry after 801.842999ms: waiting for machine to come up
	I0420 01:25:28.505673  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:28.506233  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:28.506264  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:28.506169  143271 retry.go:31] will retry after 1.176665861s: waiting for machine to come up
	I0420 01:25:29.684134  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:29.684642  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:29.684676  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:29.684582  143271 retry.go:31] will retry after 1.09397916s: waiting for machine to come up
	I0420 01:25:30.780467  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:30.780962  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:30.780987  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:30.780924  143271 retry.go:31] will retry after 1.560706704s: waiting for machine to come up
	I0420 01:25:30.280138  141927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.605620888s)
	I0420 01:25:30.280235  141927 crio.go:469] duration metric: took 2.605784372s to extract the tarball
	I0420 01:25:30.280269  141927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:30.323590  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:30.384053  141927 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:30.384083  141927 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:30.384094  141927 kubeadm.go:928] updating node { 192.168.39.222 8444 v1.30.0 crio true true} ...
	I0420 01:25:30.384258  141927 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-907988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:30.384347  141927 ssh_runner.go:195] Run: crio config
	I0420 01:25:30.431033  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:30.431059  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:30.431074  141927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:30.431094  141927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-907988 NodeName:default-k8s-diff-port-907988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:30.431267  141927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-907988"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:30.431327  141927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:30.444735  141927 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:30.444807  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:30.457543  141927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0420 01:25:30.477858  141927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:30.497632  141927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0420 01:25:30.518062  141927 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:30.522820  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:30.538677  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:30.686290  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:30.721316  141927 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988 for IP: 192.168.39.222
	I0420 01:25:30.721342  141927 certs.go:194] generating shared ca certs ...
	I0420 01:25:30.721373  141927 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:30.721607  141927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:30.721664  141927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:30.721679  141927 certs.go:256] generating profile certs ...
	I0420 01:25:30.721789  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/client.key
	I0420 01:25:30.721873  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key.b8de10ae
	I0420 01:25:30.721912  141927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key
	I0420 01:25:30.722019  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:30.722052  141927 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:30.722067  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:30.722094  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:30.722122  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:30.722144  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:30.722189  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:30.723048  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:30.762666  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:30.800218  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:30.849282  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:30.893355  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0420 01:25:30.924642  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:30.956734  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:30.986491  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:31.015876  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:31.043860  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:31.073822  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:31.100731  141927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:31.119908  141927 ssh_runner.go:195] Run: openssl version
	I0420 01:25:31.128209  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:31.140164  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145371  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145432  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.151726  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:31.163371  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:31.175115  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180237  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180286  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.186548  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:31.198703  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:31.211529  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217258  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217326  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.223822  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:25:31.236363  141927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:31.241793  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:31.250826  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:31.259850  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:31.267387  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:31.274477  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:31.281452  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
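[Editor's note, illustrative only] Each of the `openssl x509 -noout -in ... -checkend 86400` runs above asks whether a certificate will still be valid 24 hours from now. An equivalent check using Go's crypto/x509 is sketched below; the file path is a placeholder for the certs listed in the log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within d from now — the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Placeholder path; the log checks the apiserver, etcd and front-proxy certs.
        expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", expiring)
    }
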
	I0420 01:25:31.287980  141927 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:31.288094  141927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:31.288159  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.344552  141927 cri.go:89] found id: ""
	I0420 01:25:31.344646  141927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:31.357049  141927 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:31.357075  141927 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:31.357081  141927 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:31.357147  141927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:31.368636  141927 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:31.370055  141927 kubeconfig.go:125] found "default-k8s-diff-port-907988" server: "https://192.168.39.222:8444"
	I0420 01:25:31.373063  141927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:31.384821  141927 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.222
	I0420 01:25:31.384861  141927 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:31.384876  141927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:31.384946  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.432801  141927 cri.go:89] found id: ""
	I0420 01:25:31.432902  141927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:31.458842  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:31.472706  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:31.472728  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:31.472780  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:25:31.486221  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:31.486276  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:31.500036  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:25:31.510180  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:31.510237  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:31.520560  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.530333  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:31.530387  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.541053  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:25:31.551200  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:31.551257  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:31.561364  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:31.572967  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:31.690537  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.319980  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.546554  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.631937  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.729738  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:32.729838  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.230769  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.730452  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.807772  141927 api_server.go:72] duration metric: took 1.07803345s to wait for apiserver process to appear ...
	I0420 01:25:33.807805  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:33.807829  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:33.808551  141927 api_server.go:269] stopped: https://192.168.39.222:8444/healthz: Get "https://192.168.39.222:8444/healthz": dial tcp 192.168.39.222:8444: connect: connection refused
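[Editor's note, illustrative only] The healthz wait above polls https://192.168.39.222:8444/healthz, tolerating connection-refused, 403 and 500 responses until the endpoint returns 200. A rough sketch of that polling loop is below; TLS verification is skipped here purely for illustration, whereas the real check would trust the cluster's CA and credentials.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 OK or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustration only: a real check would use the cluster CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("healthz not reachable yet:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.222:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
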
	I0420 01:25:32.342951  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:32.343373  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:32.343420  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:32.343352  143271 retry.go:31] will retry after 1.871100952s: waiting for machine to come up
	I0420 01:25:34.215884  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:34.216313  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:34.216341  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:34.216253  143271 retry.go:31] will retry after 2.017753728s: waiting for machine to come up
	I0420 01:25:36.237296  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:36.237906  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:36.237936  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:36.237856  143271 retry.go:31] will retry after 3.431912056s: waiting for machine to come up
	I0420 01:25:34.308465  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.098889  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.098928  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.098945  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.149496  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.149534  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.308936  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.313975  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.314005  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:37.808680  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.818747  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.818784  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.307905  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.318528  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.318563  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.808127  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.816135  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.816167  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.307985  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.313712  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.313753  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.808225  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.812825  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.812858  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.308366  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.312930  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:40.312970  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.808320  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.812979  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:25:40.820265  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:25:40.820289  141927 api_server.go:131] duration metric: took 7.012476869s to wait for apiserver health ...
	I0420 01:25:40.820298  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:40.820304  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:40.822367  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:25:39.671070  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:39.671556  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:39.671614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:39.671502  143271 retry.go:31] will retry after 3.954438708s: waiting for machine to come up
	I0420 01:25:40.823843  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:25:40.837960  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:25:40.858294  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:25:40.867542  141927 system_pods.go:59] 8 kube-system pods found
	I0420 01:25:40.867577  141927 system_pods.go:61] "coredns-7db6d8ff4d-7v886" [0e0b3a5f-041a-4bbc-94aa-c9571a8761ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:25:40.867584  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [88f687c4-8865-4fe6-92f1-448cfde6117c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:25:40.867590  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [2c9f0d90-35c6-45ad-b9b1-9504c55a1e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:25:40.867597  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [949ce449-06b4-4650-8ba0-7567637d6aec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:25:40.867604  141927 system_pods.go:61] "kube-proxy-dg6xn" [1124d9e8-41aa-44a9-8a4a-eafd2cd6c6c9] Running
	I0420 01:25:40.867626  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [df93de11-c23d-4f5d-afd4-1af7928933fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:25:40.867640  141927 system_pods.go:61] "metrics-server-569cc877fc-rqqlt" [2c7d91c3-fce8-4603-a7be-8d9b415d71f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:25:40.867647  141927 system_pods.go:61] "storage-provisioner" [af4dc99d-feef-4c24-852a-4c8cad22dd7d] Running
	I0420 01:25:40.867654  141927 system_pods.go:74] duration metric: took 9.33485ms to wait for pod list to return data ...
	I0420 01:25:40.867670  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:25:40.871045  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:25:40.871067  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:25:40.871078  141927 node_conditions.go:105] duration metric: took 3.402743ms to run NodePressure ...
	I0420 01:25:40.871094  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:41.142438  141927 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151801  141927 kubeadm.go:733] kubelet initialised
	I0420 01:25:41.151822  141927 kubeadm.go:734] duration metric: took 9.359538ms waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151830  141927 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:25:41.160583  141927 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.169184  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169214  141927 pod_ready.go:81] duration metric: took 8.596607ms for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.169226  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169234  141927 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.175518  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175544  141927 pod_ready.go:81] duration metric: took 6.298273ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.175558  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175567  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.189038  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189062  141927 pod_ready.go:81] duration metric: took 13.484198ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.189072  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189078  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.261162  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261191  141927 pod_ready.go:81] duration metric: took 72.106763ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.261203  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261210  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662532  141927 pod_ready.go:92] pod "kube-proxy-dg6xn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:41.662553  141927 pod_ready.go:81] duration metric: took 401.337101ms for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662562  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:43.670281  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:45.122924  142411 start.go:364] duration metric: took 4m11.621269498s to acquireMachinesLock for "old-k8s-version-564860"
	I0420 01:25:45.122996  142411 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:45.123018  142411 fix.go:54] fixHost starting: 
	I0420 01:25:45.123538  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:45.123581  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:45.141340  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0420 01:25:45.141873  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:45.142555  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:25:45.142592  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:45.142979  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:45.143234  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:25:45.143426  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetState
	I0420 01:25:45.145067  142411 fix.go:112] recreateIfNeeded on old-k8s-version-564860: state=Stopped err=<nil>
	I0420 01:25:45.145114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	W0420 01:25:45.145289  142411 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:45.147498  142411 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-564860" ...
	I0420 01:25:43.630616  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631126  142057 main.go:141] libmachine: (embed-certs-269507) Found IP for machine: 192.168.50.184
	I0420 01:25:43.631159  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has current primary IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631173  142057 main.go:141] libmachine: (embed-certs-269507) Reserving static IP address...
	I0420 01:25:43.631625  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.631677  142057 main.go:141] libmachine: (embed-certs-269507) DBG | skip adding static IP to network mk-embed-certs-269507 - found existing host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"}
	I0420 01:25:43.631692  142057 main.go:141] libmachine: (embed-certs-269507) Reserved static IP address: 192.168.50.184
	I0420 01:25:43.631710  142057 main.go:141] libmachine: (embed-certs-269507) Waiting for SSH to be available...
	I0420 01:25:43.631731  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Getting to WaitForSSH function...
	I0420 01:25:43.634292  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.634650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634833  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH client type: external
	I0420 01:25:43.634883  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa (-rw-------)
	I0420 01:25:43.634916  142057 main.go:141] libmachine: (embed-certs-269507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:43.634935  142057 main.go:141] libmachine: (embed-certs-269507) DBG | About to run SSH command:
	I0420 01:25:43.634949  142057 main.go:141] libmachine: (embed-certs-269507) DBG | exit 0
	I0420 01:25:43.757712  142057 main.go:141] libmachine: (embed-certs-269507) DBG | SSH cmd err, output: <nil>: 
	I0420 01:25:43.758118  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetConfigRaw
	I0420 01:25:43.758820  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:43.761626  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762007  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.762083  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762328  142057 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/config.json ...
	I0420 01:25:43.762556  142057 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:43.762575  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:43.762827  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.765841  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.766304  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766461  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.766636  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766766  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766884  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.767111  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.767371  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.767386  142057 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:43.874709  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:43.874741  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875018  142057 buildroot.go:166] provisioning hostname "embed-certs-269507"
	I0420 01:25:43.875052  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875265  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.878226  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878645  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.878675  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878767  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.878976  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879120  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879246  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.879375  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.879585  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.879613  142057 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-269507 && echo "embed-certs-269507" | sudo tee /etc/hostname
	I0420 01:25:44.003458  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-269507
	
	I0420 01:25:44.003502  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.006277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006706  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.006745  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006922  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.007227  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007417  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007604  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.007772  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.007959  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.007979  142057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-269507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-269507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-269507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:44.124457  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:44.124494  142057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:44.124516  142057 buildroot.go:174] setting up certificates
	I0420 01:25:44.124526  142057 provision.go:84] configureAuth start
	I0420 01:25:44.124537  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:44.124850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:44.127589  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.127958  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.127980  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.128196  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.130485  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130792  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.130830  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130992  142057 provision.go:143] copyHostCerts
	I0420 01:25:44.131060  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:44.131075  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:44.131132  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:44.131237  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:44.131246  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:44.131266  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:44.131326  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:44.131333  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:44.131349  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:44.131397  142057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.embed-certs-269507 san=[127.0.0.1 192.168.50.184 embed-certs-269507 localhost minikube]
	I0420 01:25:44.404404  142057 provision.go:177] copyRemoteCerts
	I0420 01:25:44.404469  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:44.404498  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.407318  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.407683  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.408033  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.408182  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.408307  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.498069  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:44.524979  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0420 01:25:44.553537  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:25:44.580307  142057 provision.go:87] duration metric: took 455.767679ms to configureAuth
	I0420 01:25:44.580332  142057 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:44.580609  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:44.580722  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.583352  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583728  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.583761  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583978  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.584205  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584383  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584516  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.584715  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.584905  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.584926  142057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:44.882565  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:44.882599  142057 machine.go:97] duration metric: took 1.120028956s to provisionDockerMachine
	I0420 01:25:44.882612  142057 start.go:293] postStartSetup for "embed-certs-269507" (driver="kvm2")
	I0420 01:25:44.882622  142057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:44.882639  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:44.882971  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:44.883012  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.885829  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886181  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.886208  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886372  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.886598  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.886761  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.886915  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.972428  142057 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:44.977228  142057 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:44.977257  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:44.977344  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:44.977435  142057 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:44.977552  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:44.987372  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:45.014435  142057 start.go:296] duration metric: took 131.807177ms for postStartSetup
	I0420 01:25:45.014484  142057 fix.go:56] duration metric: took 20.699839101s for fixHost
	I0420 01:25:45.014512  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.017361  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017768  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.017795  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017943  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.018150  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018302  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018421  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.018643  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:45.018815  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:45.018827  142057 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0420 01:25:45.122766  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576345.101529100
	
	I0420 01:25:45.122788  142057 fix.go:216] guest clock: 1713576345.101529100
	I0420 01:25:45.122796  142057 fix.go:229] Guest: 2024-04-20 01:25:45.1015291 +0000 UTC Remote: 2024-04-20 01:25:45.014489313 +0000 UTC m=+293.764572165 (delta=87.039787ms)
	I0420 01:25:45.122823  142057 fix.go:200] guest clock delta is within tolerance: 87.039787ms
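
The three fix.go lines above are the guest-clock skew check: the runner executes date +%s.%N over SSH, parses the guest's epoch time, and compares it with the host clock, accepting the host as fixed only while the delta stays inside tolerance (87ms here). Below is a minimal Go sketch of that comparison, not minikube's actual implementation; the one-second tolerance is chosen only for the example.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// withinTolerance reports the absolute host/guest clock delta and whether it
// is inside tol. guestEpoch is the output of `date +%s.%N` on the guest:
// seconds since the epoch with nanoseconds after the dot.
func withinTolerance(guestEpoch string, host time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, false
	}
	// float64 loses a little nanosecond precision; fine for a skew check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Value shaped like the one reported in the log above.
	delta, ok := withinTolerance("1713576345.101529100", time.Now(), time.Second)
	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, ok)
}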
	I0420 01:25:45.122828  142057 start.go:83] releasing machines lock for "embed-certs-269507", held for 20.808247089s
	I0420 01:25:45.122851  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.123156  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:45.125956  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126377  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.126408  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126536  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127059  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127264  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127349  142057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:45.127404  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.127470  142057 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:45.127497  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.130071  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130393  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130427  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130447  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130727  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.130825  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130854  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130932  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131041  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.131115  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131220  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.131301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131451  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131597  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.211824  142057 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:45.236425  142057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:45.383069  142057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:45.391072  142057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:45.391159  142057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:45.410287  142057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:45.410313  142057 start.go:494] detecting cgroup driver to use...
	I0420 01:25:45.410395  142057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:45.433663  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:45.452933  142057 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:45.452999  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:45.473208  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:45.493261  142057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:45.650111  142057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:45.847482  142057 docker.go:233] disabling docker service ...
	I0420 01:25:45.847559  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:45.871032  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:45.892747  142057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:46.076222  142057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:46.218078  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:46.236006  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:46.259279  142057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:46.259363  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.272573  142057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:46.272647  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.286468  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.298708  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.313197  142057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:46.332844  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.345531  142057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.367686  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
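
Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) leave the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below. This is a sketch reconstructed from the commands in the log, not a dump of the real file; the [crio.image] and [crio.runtime] section headers are cri-o's usual tables and are assumed here.

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]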
	I0420 01:25:46.379702  142057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:46.390491  142057 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:46.390558  142057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:46.406027  142057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:46.417370  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:46.543690  142057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:46.725507  142057 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:46.725599  142057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:46.734173  142057 start.go:562] Will wait 60s for crictl version
	I0420 01:25:46.734246  142057 ssh_runner.go:195] Run: which crictl
	I0420 01:25:46.740381  142057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:46.801341  142057 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:46.801431  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.843121  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.889958  142057 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
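
Before printing the "Preparing Kubernetes" line, the runner has confirmed that /var/run/crio/crio.sock answers and that crictl reports a working runtime (cri-o 1.29.1). A minimal readiness probe in the same spirit is sketched below in Go; it is not minikube's code, and the 60-second budget simply mirrors the "Will wait 60s for crictl version" line above.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// crictlVersion polls `crictl version` until the runtime answers or the
// timeout expires, then returns the key/value fields it printed
// (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion).
func crictlVersion(timeout time.Duration) (map[string]string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
		if err == nil {
			fields := map[string]string{}
			sc := bufio.NewScanner(bytes.NewReader(out))
			for sc.Scan() {
				if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
					fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
				}
			}
			return fields, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("crictl did not respond within %s: %w", timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	fields, err := crictlVersion(60 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(fields["RuntimeName"], fields["RuntimeVersion"])
}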
	I0420 01:25:45.148885  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .Start
	I0420 01:25:45.149115  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring networks are active...
	I0420 01:25:45.149856  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network default is active
	I0420 01:25:45.150205  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network mk-old-k8s-version-564860 is active
	I0420 01:25:45.150615  142411 main.go:141] libmachine: (old-k8s-version-564860) Getting domain xml...
	I0420 01:25:45.151296  142411 main.go:141] libmachine: (old-k8s-version-564860) Creating domain...
	I0420 01:25:46.465532  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting to get IP...
	I0420 01:25:46.466816  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.467306  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.467383  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.467288  143434 retry.go:31] will retry after 265.980653ms: waiting for machine to come up
	I0420 01:25:46.735144  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.735676  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.735700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.735627  143434 retry.go:31] will retry after 254.534112ms: waiting for machine to come up
	I0420 01:25:46.992222  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.992707  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.992738  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.992621  143434 retry.go:31] will retry after 434.179962ms: waiting for machine to come up
	I0420 01:25:47.428397  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.428949  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.428987  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.428899  143434 retry.go:31] will retry after 533.143168ms: waiting for machine to come up
	I0420 01:25:47.963467  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.964008  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.964035  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.963957  143434 retry.go:31] will retry after 601.536298ms: waiting for machine to come up
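
The retry.go lines above are the wait-for-IP loop for the old-k8s-version-564860 VM: the domain is defined and has a MAC address, but no DHCP lease yet, so the driver retries with a jittered delay that tends to grow on each attempt (266ms, 254ms, 434ms, 533ms, 601ms, ...). A rough Go sketch of that pattern follows; lookupIP is a hypothetical stand-in for the libvirt lease query, and the exact backoff schedule is illustrative only.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder for "ask libvirt for the domain's current IP".
func lookupIP() (string, error) { return "", errNoLease }

// waitForIP polls lookupIP with a growing, jittered delay until an address
// appears or the overall timeout is exceeded.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		delay := time.Duration(200*attempt+rand.Intn(200)) * time.Millisecond
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}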
	I0420 01:25:45.675159  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:48.175457  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:48.175487  141927 pod_ready.go:81] duration metric: took 6.512916578s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:48.175499  141927 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:46.891233  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:46.894647  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895107  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:46.895170  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895398  142057 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:46.900604  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:46.920025  142057 kubeadm.go:877] updating cluster {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:46.920184  142057 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:46.920247  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:46.967086  142057 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:46.967171  142057 ssh_runner.go:195] Run: which lz4
	I0420 01:25:46.973391  142057 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:25:46.979210  142057 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:46.979241  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:48.806615  142057 crio.go:462] duration metric: took 1.83326325s to copy over tarball
	I0420 01:25:48.806701  142057 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
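
The sequence from "Checking if preload exists" through the tar run above is the preloaded-image shortcut: the runner lists images with crictl, sees that registry.k8s.io/kube-apiserver:v1.30.0 is missing, copies the ~395MB preload tarball over SSH, and unpacks it into /var with lz4. Below is a small Go sketch of just the "is the preload already there?" decision; it is not minikube's code, and the JSON shape follows crictl's standard `images --output json` output.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors the relevant part of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloaded reports whether an image tag containing want is already known to
// the container runtime.
func preloaded(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := preloaded("registry.k8s.io/kube-apiserver:v1.30.0")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("preloaded:", ok)
}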
	I0420 01:25:48.567922  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:48.568436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:48.568469  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:48.568387  143434 retry.go:31] will retry after 853.809635ms: waiting for machine to come up
	I0420 01:25:49.423590  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:49.424154  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:49.424178  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:49.424099  143434 retry.go:31] will retry after 1.096859163s: waiting for machine to come up
	I0420 01:25:50.522906  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:50.523406  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:50.523436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:50.523350  143434 retry.go:31] will retry after 983.057252ms: waiting for machine to come up
	I0420 01:25:51.508033  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:51.508557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:51.508596  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:51.508497  143434 retry.go:31] will retry after 1.463876638s: waiting for machine to come up
	I0420 01:25:52.974032  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:52.974508  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:52.974536  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:52.974459  143434 retry.go:31] will retry after 1.859889372s: waiting for machine to come up
	I0420 01:25:50.183489  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:53.262055  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:51.389972  142057 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.583237436s)
	I0420 01:25:51.390002  142057 crio.go:469] duration metric: took 2.583356337s to extract the tarball
	I0420 01:25:51.390010  142057 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:51.434741  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:51.489945  142057 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:51.489974  142057 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:51.489984  142057 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0420 01:25:51.490126  142057 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-269507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:51.490226  142057 ssh_runner.go:195] Run: crio config
	I0420 01:25:51.548273  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:25:51.548299  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:51.548316  142057 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:51.548356  142057 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-269507 NodeName:embed-certs-269507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:51.548534  142057 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-269507"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:51.548614  142057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:51.560359  142057 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:51.560428  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:51.571609  142057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0420 01:25:51.594462  142057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:51.621417  142057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0420 01:25:51.649250  142057 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:51.655304  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:51.675476  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:51.809652  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:51.829341  142057 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507 for IP: 192.168.50.184
	I0420 01:25:51.829405  142057 certs.go:194] generating shared ca certs ...
	I0420 01:25:51.829430  142057 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:51.829627  142057 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:51.829687  142057 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:51.829697  142057 certs.go:256] generating profile certs ...
	I0420 01:25:51.829823  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/client.key
	I0420 01:25:52.088423  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key.c1e63643
	I0420 01:25:52.088542  142057 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key
	I0420 01:25:52.088748  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:52.088811  142057 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:52.088841  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:52.088880  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:52.088919  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:52.088959  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:52.089020  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:52.090046  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:52.130739  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:52.163426  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:52.202470  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:52.232070  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0420 01:25:52.265640  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:52.305670  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:52.336788  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:52.371507  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:52.403015  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:52.433761  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:52.461373  142057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:52.480675  142057 ssh_runner.go:195] Run: openssl version
	I0420 01:25:52.486965  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:52.499466  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506355  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506409  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.514625  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:52.530107  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:52.544051  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549426  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549495  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.555960  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:25:52.569332  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:52.583057  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588323  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588390  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.594622  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:52.607021  142057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:52.612270  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:52.619182  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:52.626168  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:52.633276  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:52.639840  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:52.646478  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
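
Each openssl x509 -checkend 86400 call above asks whether the given certificate expires within the next 86400 seconds (24 hours); a failing check would force the certificate to be regenerated before the control plane restarts. The same check can be expressed in plain Go, shown here as a sketch rather than minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < window, nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}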
	I0420 01:25:52.652982  142057 kubeadm.go:391] StartCluster: {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:52.653130  142057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:52.653182  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.699113  142057 cri.go:89] found id: ""
	I0420 01:25:52.699200  142057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:52.712835  142057 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:52.712859  142057 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:52.712867  142057 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:52.712914  142057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:52.726130  142057 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:52.727354  142057 kubeconfig.go:125] found "embed-certs-269507" server: "https://192.168.50.184:8443"
	I0420 01:25:52.729600  142057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:52.744185  142057 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0420 01:25:52.744217  142057 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:52.744231  142057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:52.744292  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.792889  142057 cri.go:89] found id: ""
	I0420 01:25:52.792967  142057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:52.812771  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:52.824478  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:52.824495  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:52.824533  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:25:52.835612  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:52.835679  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:52.847089  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:25:52.858049  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:52.858126  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:52.872787  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.886588  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:52.886649  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.899467  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:25:52.910884  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:52.910942  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:52.922217  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:52.933432  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:53.108167  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.044709  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.257949  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.327450  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.426738  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:54.426849  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:54.926955  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.427198  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.489075  142057 api_server.go:72] duration metric: took 1.06233038s to wait for apiserver process to appear ...
	I0420 01:25:55.489109  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:55.489137  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:55.489682  142057 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0420 01:25:55.989278  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:54.836137  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:54.836639  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:54.836670  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:54.836584  143434 retry.go:31] will retry after 2.172259495s: waiting for machine to come up
	I0420 01:25:57.011412  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:57.011810  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:57.011840  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:57.011782  143434 retry.go:31] will retry after 2.279304552s: waiting for machine to come up
	I0420 01:25:55.684205  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:57.686312  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:58.334562  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.334594  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.334614  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.344779  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.344814  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.490111  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.499158  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.499194  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:58.989417  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.996443  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.996477  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.489585  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.496235  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:59.496271  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.989892  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.994154  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:26:00.000276  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:26:00.000301  142057 api_server.go:131] duration metric: took 4.511183577s to wait for apiserver health ...
	I0420 01:26:00.000311  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:26:00.000317  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:00.002217  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:26:00.003646  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:26:00.018114  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:26:00.040866  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:26:00.050481  142057 system_pods.go:59] 8 kube-system pods found
	I0420 01:26:00.050514  142057 system_pods.go:61] "coredns-7db6d8ff4d-79bzc" [af5f0029-75b5-4131-8c60-5a4fee48c618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:26:00.050524  142057 system_pods.go:61] "etcd-embed-certs-269507" [d6dfc301-0cfb-4bfb-99f7-948b77b38f53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:26:00.050533  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [915deee2-f571-4337-bcdc-07f40d06b9c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:26:00.050539  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [21c885b0-6d1b-4593-87f3-141e512af7dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:26:00.050545  142057 system_pods.go:61] "kube-proxy-crzk6" [d5972e9a-15cd-4b62-90d5-c10bdfa20989] Running
	I0420 01:26:00.050553  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [1e556102-d4c9-494c-baf2-ab7e62d7d1e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:26:00.050559  142057 system_pods.go:61] "metrics-server-569cc877fc-8s79l" [1dc06e4a-3f47-4ef1-8757-81262c52fe55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:26:00.050583  142057 system_pods.go:61] "storage-provisioner" [f7b03907-0042-48d8-981b-1b8e665d58e7] Running
	I0420 01:26:00.050600  142057 system_pods.go:74] duration metric: took 9.699819ms to wait for pod list to return data ...
	I0420 01:26:00.050608  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:26:00.053915  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:26:00.053964  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:26:00.053975  142057 node_conditions.go:105] duration metric: took 3.363162ms to run NodePressure ...
	I0420 01:26:00.053994  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:00.327736  142057 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332409  142057 kubeadm.go:733] kubelet initialised
	I0420 01:26:00.332434  142057 kubeadm.go:734] duration metric: took 4.671334ms waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332446  142057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:26:00.338296  142057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:59.292382  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:59.292905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:59.292939  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:59.292852  143434 retry.go:31] will retry after 4.056028382s: waiting for machine to come up
	I0420 01:26:03.350591  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:03.351022  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:26:03.351047  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:26:03.350978  143434 retry.go:31] will retry after 5.38819739s: waiting for machine to come up
	I0420 01:26:00.184338  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.684685  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.345607  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:03.850887  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:03.850915  142057 pod_ready.go:81] duration metric: took 3.512592061s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:03.850929  142057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:05.857665  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:05.183082  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:07.682906  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:10.191165  141746 start.go:364] duration metric: took 1m1.9514957s to acquireMachinesLock for "no-preload-338118"
	I0420 01:26:10.191222  141746 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:26:10.191235  141746 fix.go:54] fixHost starting: 
	I0420 01:26:10.191624  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:26:10.191668  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:26:10.212169  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I0420 01:26:10.212568  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:26:10.213074  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:26:10.213120  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:26:10.213524  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:26:10.213755  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:10.213957  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:26:10.215578  141746 fix.go:112] recreateIfNeeded on no-preload-338118: state=Stopped err=<nil>
	I0420 01:26:10.215604  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	W0420 01:26:10.215788  141746 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:26:10.217632  141746 out.go:177] * Restarting existing kvm2 VM for "no-preload-338118" ...
	I0420 01:26:10.218915  141746 main.go:141] libmachine: (no-preload-338118) Calling .Start
	I0420 01:26:10.219094  141746 main.go:141] libmachine: (no-preload-338118) Ensuring networks are active...
	I0420 01:26:10.219820  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network default is active
	I0420 01:26:10.220181  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network mk-no-preload-338118 is active
	I0420 01:26:10.220584  141746 main.go:141] libmachine: (no-preload-338118) Getting domain xml...
	I0420 01:26:10.221275  141746 main.go:141] libmachine: (no-preload-338118) Creating domain...
	I0420 01:26:08.363522  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:09.858701  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:09.858731  142057 pod_ready.go:81] duration metric: took 6.007793209s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:09.858742  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:08.743367  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.743867  142411 main.go:141] libmachine: (old-k8s-version-564860) Found IP for machine: 192.168.61.91
	I0420 01:26:08.743896  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserving static IP address...
	I0420 01:26:08.743914  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has current primary IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.744294  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.744324  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserved static IP address: 192.168.61.91
	I0420 01:26:08.744344  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | skip adding static IP to network mk-old-k8s-version-564860 - found existing host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"}
	I0420 01:26:08.744368  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Getting to WaitForSSH function...
	I0420 01:26:08.744387  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting for SSH to be available...
	I0420 01:26:08.746714  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747119  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.747155  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747278  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH client type: external
	I0420 01:26:08.747314  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa (-rw-------)
	I0420 01:26:08.747346  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:08.747359  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | About to run SSH command:
	I0420 01:26:08.747373  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | exit 0
	I0420 01:26:08.877633  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:08.878016  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:26:08.878715  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:08.881556  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.881982  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.882028  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.882326  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:26:08.882586  142411 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:08.882613  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:08.882853  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:08.885133  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885479  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.885510  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885647  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:08.885843  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886029  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886192  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:08.886403  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:08.886642  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:08.886657  142411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:09.006625  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:09.006655  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.006914  142411 buildroot.go:166] provisioning hostname "old-k8s-version-564860"
	I0420 01:26:09.006940  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.007144  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.010016  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010349  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.010374  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010597  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.010841  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011040  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011235  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.011439  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.011682  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.011718  142411 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-564860 && echo "old-k8s-version-564860" | sudo tee /etc/hostname
	I0420 01:26:09.155581  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-564860
	
	I0420 01:26:09.155612  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.158583  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159021  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.159068  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159285  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.159519  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159747  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159933  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.160128  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.160362  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.160390  142411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-564860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-564860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-564860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:09.288804  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:09.288834  142411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:09.288856  142411 buildroot.go:174] setting up certificates
	I0420 01:26:09.288867  142411 provision.go:84] configureAuth start
	I0420 01:26:09.288877  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.289286  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:09.292454  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.292884  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.292923  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.293076  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.295234  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295537  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.295565  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295675  142411 provision.go:143] copyHostCerts
	I0420 01:26:09.295747  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:09.295758  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:09.295811  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:09.295936  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:09.295951  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:09.295981  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:09.296063  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:09.296075  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:09.296095  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:09.296154  142411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-564860 san=[127.0.0.1 192.168.61.91 localhost minikube old-k8s-version-564860]
	I0420 01:26:09.436313  142411 provision.go:177] copyRemoteCerts
	I0420 01:26:09.436373  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:09.436401  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.439316  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.439743  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439856  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.440057  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.440226  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.440360  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:09.529141  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:09.558376  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0420 01:26:09.586393  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:26:09.615274  142411 provision.go:87] duration metric: took 326.393984ms to configureAuth
	I0420 01:26:09.615300  142411 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:09.615501  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:26:09.615590  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.618470  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.618905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.618938  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.619141  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.619325  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619505  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619662  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.619862  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.620073  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.620091  142411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:09.924929  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:09.924958  142411 machine.go:97] duration metric: took 1.042352034s to provisionDockerMachine
	I0420 01:26:09.924973  142411 start.go:293] postStartSetup for "old-k8s-version-564860" (driver="kvm2")
	I0420 01:26:09.924985  142411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:09.925021  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:09.925441  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:09.925485  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.927985  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928377  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.928407  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928565  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.928770  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.928944  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.929114  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.020189  142411 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:10.025578  142411 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:10.025607  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:10.025707  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:10.025795  142411 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:10.025888  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:10.038138  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:10.065063  142411 start.go:296] duration metric: took 140.07164ms for postStartSetup
	I0420 01:26:10.065111  142411 fix.go:56] duration metric: took 24.94209431s for fixHost
	I0420 01:26:10.065139  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.068099  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068493  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.068544  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068697  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.068916  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069255  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.069455  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:10.069662  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:10.069678  142411 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:10.190955  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576370.174630368
	
	I0420 01:26:10.190984  142411 fix.go:216] guest clock: 1713576370.174630368
	I0420 01:26:10.190994  142411 fix.go:229] Guest: 2024-04-20 01:26:10.174630368 +0000 UTC Remote: 2024-04-20 01:26:10.065116719 +0000 UTC m=+276.709087933 (delta=109.513649ms)
	I0420 01:26:10.191036  142411 fix.go:200] guest clock delta is within tolerance: 109.513649ms
	I0420 01:26:10.191044  142411 start.go:83] releasing machines lock for "old-k8s-version-564860", held for 25.068071712s
	I0420 01:26:10.191074  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.191368  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:10.194872  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195333  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.195365  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195510  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196060  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196253  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196331  142411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:10.196375  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.196439  142411 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:10.196467  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.199156  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199522  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.199572  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199760  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.199975  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200098  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.200137  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.200165  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.200326  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.200700  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.200857  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200992  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.201150  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.283430  142411 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:10.310703  142411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:10.462457  142411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:10.470897  142411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:10.470993  142411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:10.489867  142411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:10.489899  142411 start.go:494] detecting cgroup driver to use...
	I0420 01:26:10.489996  142411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:10.512741  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:10.530013  142411 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:10.530077  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:10.548567  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:10.565645  142411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:10.693390  142411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:10.878889  142411 docker.go:233] disabling docker service ...
	I0420 01:26:10.878973  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:10.901233  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:10.915219  142411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:11.053815  142411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:11.201766  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:11.218569  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:11.240543  142411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0420 01:26:11.240604  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.253384  142411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:11.253460  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.268703  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.281575  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.296477  142411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:11.312458  142411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:11.328008  142411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:11.328076  142411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:11.349027  142411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:11.362064  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:11.500624  142411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:11.665985  142411 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:11.666061  142411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:11.672929  142411 start.go:562] Will wait 60s for crictl version
	I0420 01:26:11.673006  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:11.678398  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:11.727572  142411 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:11.727663  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.760504  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.803463  142411 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0420 01:26:11.804782  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:11.807755  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808135  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:11.808177  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808396  142411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:11.813653  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:11.830618  142411 kubeadm.go:877] updating cluster {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:11.830793  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:26:11.830874  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:11.889149  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:11.889218  142411 ssh_runner.go:195] Run: which lz4
	I0420 01:26:11.894461  142411 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:26:11.900427  142411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:26:11.900456  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0420 01:26:10.183110  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:12.184209  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:11.636722  141746 main.go:141] libmachine: (no-preload-338118) Waiting to get IP...
	I0420 01:26:11.637635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.638048  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.638135  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.638011  143635 retry.go:31] will retry after 264.135122ms: waiting for machine to come up
	I0420 01:26:11.903486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.904008  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.904053  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.903958  143635 retry.go:31] will retry after 367.952741ms: waiting for machine to come up
	I0420 01:26:12.273951  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.274547  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.274584  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.274491  143635 retry.go:31] will retry after 390.958735ms: waiting for machine to come up
	I0420 01:26:12.667348  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.667888  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.667915  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.667820  143635 retry.go:31] will retry after 554.212994ms: waiting for machine to come up
	I0420 01:26:13.223423  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.224158  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.224184  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.224058  143635 retry.go:31] will retry after 686.102207ms: waiting for machine to come up
	I0420 01:26:13.911430  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.912019  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.912042  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.911968  143635 retry.go:31] will retry after 875.263983ms: waiting for machine to come up
	I0420 01:26:14.788949  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:14.789431  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:14.789481  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:14.789392  143635 retry.go:31] will retry after 847.129796ms: waiting for machine to come up
	I0420 01:26:15.637863  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:15.638348  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:15.638379  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:15.638288  143635 retry.go:31] will retry after 1.162423805s: waiting for machine to come up
	I0420 01:26:11.866297  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:13.868499  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:14.867208  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.867241  142057 pod_ready.go:81] duration metric: took 5.008488667s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.867254  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875100  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.875119  142057 pod_ready.go:81] duration metric: took 7.856647ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875131  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880630  142057 pod_ready.go:92] pod "kube-proxy-crzk6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.880651  142057 pod_ready.go:81] duration metric: took 5.512379ms for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880661  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885625  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.885645  142057 pod_ready.go:81] duration metric: took 4.976632ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885656  142057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
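The pod_ready.go entries above poll each kube-system pod until its Ready condition reports "True", logging "False" and retrying in the meantime. A hedged client-go sketch of that readiness check is below; the kubeconfig path and the isPodReady helper name are assumptions for illustration, not minikube's own pod_ready.go code.

    // podready.go: illustrative check of a pod's Ready condition with client-go.
    // The kubeconfig path and helper names are assumptions, not minikube code.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
    		"kube-scheduler-embed-certs-269507", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", isPodReady(pod))
    }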
	I0420 01:26:14.031960  142411 crio.go:462] duration metric: took 2.137532848s to copy over tarball
	I0420 01:26:14.032043  142411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:26:17.581625  142411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.549548059s)
	I0420 01:26:17.581660  142411 crio.go:469] duration metric: took 3.549666471s to extract the tarball
	I0420 01:26:17.581672  142411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:26:17.633172  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:17.679514  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:17.679544  142411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:17.679710  142411 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.679940  142411 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.680051  142411 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.680061  142411 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.680225  142411 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.680266  142411 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0420 01:26:17.680442  142411 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.680516  142411 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.682336  142411 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.682425  142411 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0420 01:26:17.682428  142411 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.682462  142411 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.682341  142411 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.682512  142411 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.682952  142411 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.682955  142411 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.846602  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.850673  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.866812  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.871983  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.876346  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0420 01:26:17.876745  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.881269  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.985788  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.997662  142411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0420 01:26:17.997709  142411 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0420 01:26:17.997716  142411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.997751  142411 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.997778  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:17.997797  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071610  142411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0420 01:26:18.071682  142411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.071705  142411 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0420 01:26:18.071741  142411 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.071760  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071793  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.085631  142411 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0420 01:26:18.085689  142411 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0420 01:26:18.085748  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.087239  142411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0420 01:26:18.087288  142411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.087362  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.094891  142411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0420 01:26:18.094940  142411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:18.094989  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.232524  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.232613  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0420 01:26:18.232649  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.232682  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.232710  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:14.684499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:17.185481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:16.802494  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:16.802977  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:16.803009  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:16.802908  143635 retry.go:31] will retry after 1.370900633s: waiting for machine to come up
	I0420 01:26:18.175474  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:18.175996  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:18.176022  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:18.175943  143635 retry.go:31] will retry after 1.698879408s: waiting for machine to come up
	I0420 01:26:19.876437  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:19.876901  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:19.876932  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:19.876843  143635 retry.go:31] will retry after 2.622833508s: waiting for machine to come up
	I0420 01:26:16.894119  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.894941  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.408724  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0420 01:26:18.408791  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0420 01:26:18.410041  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0420 01:26:18.410136  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0420 01:26:18.424042  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0420 01:26:18.428203  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0420 01:26:18.428295  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0420 01:26:18.450170  142411 cache_images.go:92] duration metric: took 770.600266ms to LoadCachedImages
	W0420 01:26:18.450288  142411 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0420 01:26:18.450305  142411 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.20.0 crio true true} ...
	I0420 01:26:18.450428  142411 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-564860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:18.450522  142411 ssh_runner.go:195] Run: crio config
	I0420 01:26:18.503362  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:26:18.503407  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:18.503427  142411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:18.503463  142411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-564860 NodeName:old-k8s-version-564860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0420 01:26:18.503671  142411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-564860"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:18.503745  142411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0420 01:26:18.516393  142411 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:18.516475  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:18.529038  142411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0420 01:26:18.550442  142411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:18.572012  142411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0420 01:26:18.595682  142411 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:18.602036  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:18.622226  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:18.774466  142411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:18.795074  142411 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860 for IP: 192.168.61.91
	I0420 01:26:18.795104  142411 certs.go:194] generating shared ca certs ...
	I0420 01:26:18.795125  142411 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:18.795301  142411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:18.795342  142411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:18.795352  142411 certs.go:256] generating profile certs ...
	I0420 01:26:18.795433  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key
	I0420 01:26:18.795487  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f
	I0420 01:26:18.795524  142411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key
	I0420 01:26:18.795645  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:18.795675  142411 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:18.795685  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:18.795706  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:18.795735  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:18.795765  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:18.795828  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:18.796607  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:18.845581  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:18.891065  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:18.933536  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:18.977381  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:26:19.009816  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:19.042053  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:19.090614  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:26:19.119554  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:19.147545  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:19.177775  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:19.211008  142411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:19.234399  142411 ssh_runner.go:195] Run: openssl version
	I0420 01:26:19.242808  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:19.256132  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261681  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261739  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.270546  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:19.284112  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:19.296998  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302497  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302551  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.310883  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:19.325130  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:19.338964  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344915  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344986  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.351926  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:19.366428  142411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:19.372391  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:19.379606  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:19.386698  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:19.395102  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:19.401981  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:19.409477  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
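The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate will still be valid 86400 seconds (24 hours) from now. A hedged Go equivalent of that check is sketched below; the certificate path is an example, not a file taken from this test run.

    // certexpiry.go: illustrative equivalent of `openssl x509 -checkend 86400`,
    // checking that a PEM certificate remains valid for another 24 hours.
    // The path below is an example, not a file from this test run.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the certificate at path is still valid d from now.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("valid for the next 24h:", ok)
    }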
	I0420 01:26:19.416444  142411 kubeadm.go:391] StartCluster: {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:19.416557  142411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:19.416600  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.460782  142411 cri.go:89] found id: ""
	I0420 01:26:19.460884  142411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:19.473812  142411 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:19.473832  142411 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:19.473838  142411 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:19.473899  142411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:19.486686  142411 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:19.487757  142411 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:26:19.488411  142411 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-76456/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-564860" cluster setting kubeconfig missing "old-k8s-version-564860" context setting]
	I0420 01:26:19.489438  142411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:19.491237  142411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:19.503483  142411 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0420 01:26:19.503519  142411 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:19.503530  142411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:19.503597  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.546350  142411 cri.go:89] found id: ""
	I0420 01:26:19.546438  142411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:19.568177  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:19.580545  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:19.580573  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:19.580658  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:19.592945  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:19.593010  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:19.605598  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:19.617261  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:19.617346  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:19.629242  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.640143  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:19.640211  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.654226  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:19.666207  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:19.666275  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:19.678899  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:19.694374  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:19.845435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.619142  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.891265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.020834  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.124545  142411 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:21.124652  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:21.625462  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.125171  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:23.125077  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:19.685129  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.183561  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.502227  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:22.502665  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:22.502696  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:22.502603  143635 retry.go:31] will retry after 3.3877716s: waiting for machine to come up
	I0420 01:26:21.392042  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.392579  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.394230  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.625392  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.125446  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.625035  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.624718  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.625420  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.125162  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.625475  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:28.125637  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.685014  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:27.182545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.891769  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:25.892321  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:25.892353  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:25.892252  143635 retry.go:31] will retry after 3.395760477s: waiting for machine to come up
	I0420 01:26:29.290361  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:29.290858  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:29.290907  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:29.290791  143635 retry.go:31] will retry after 4.86761736s: waiting for machine to come up
	I0420 01:26:27.892903  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:30.392680  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:28.625781  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.125145  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.625647  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.125081  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.625404  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.124753  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.124750  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.624841  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:33.125120  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.682707  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:31.682790  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:33.683549  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.162306  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.162883  141746 main.go:141] libmachine: (no-preload-338118) Found IP for machine: 192.168.72.89
	I0420 01:26:34.162912  141746 main.go:141] libmachine: (no-preload-338118) Reserving static IP address...
	I0420 01:26:34.162928  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has current primary IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.163266  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.163296  141746 main.go:141] libmachine: (no-preload-338118) Reserved static IP address: 192.168.72.89
	I0420 01:26:34.163316  141746 main.go:141] libmachine: (no-preload-338118) DBG | skip adding static IP to network mk-no-preload-338118 - found existing host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"}
	I0420 01:26:34.163335  141746 main.go:141] libmachine: (no-preload-338118) DBG | Getting to WaitForSSH function...
	I0420 01:26:34.163350  141746 main.go:141] libmachine: (no-preload-338118) Waiting for SSH to be available...
	I0420 01:26:34.165641  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.165947  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.165967  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.166136  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH client type: external
	I0420 01:26:34.166161  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa (-rw-------)
	I0420 01:26:34.166190  141746 main.go:141] libmachine: (no-preload-338118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:34.166216  141746 main.go:141] libmachine: (no-preload-338118) DBG | About to run SSH command:
	I0420 01:26:34.166232  141746 main.go:141] libmachine: (no-preload-338118) DBG | exit 0
	I0420 01:26:34.293435  141746 main.go:141] libmachine: (no-preload-338118) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:34.293789  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetConfigRaw
	I0420 01:26:34.294381  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.296958  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297355  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.297391  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297670  141746 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/config.json ...
	I0420 01:26:34.297915  141746 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:34.297945  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:34.298191  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.300645  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.301068  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301280  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.301496  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301719  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301895  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.302104  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.302272  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.302284  141746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:34.419082  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:34.419113  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419424  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:26:34.419452  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419715  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.422630  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423010  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.423052  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423212  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.423415  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423599  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423716  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.423928  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.424135  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.424149  141746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-338118 && echo "no-preload-338118" | sudo tee /etc/hostname
	I0420 01:26:34.555223  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-338118
	
	I0420 01:26:34.555254  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.558217  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558606  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.558643  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558792  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.558999  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559241  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559423  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.559655  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.559827  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.559844  141746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-338118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-338118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-338118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:34.684192  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:34.684226  141746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:34.684261  141746 buildroot.go:174] setting up certificates
	I0420 01:26:34.684270  141746 provision.go:84] configureAuth start
	I0420 01:26:34.684289  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.684581  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.687363  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687703  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.687733  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687876  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.690220  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690542  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.690569  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690739  141746 provision.go:143] copyHostCerts
	I0420 01:26:34.690806  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:34.690817  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:34.690869  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:34.691006  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:34.691017  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:34.691038  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:34.691103  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:34.691111  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:34.691130  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:34.691178  141746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.no-preload-338118 san=[127.0.0.1 192.168.72.89 localhost minikube no-preload-338118]
	I0420 01:26:34.899595  141746 provision.go:177] copyRemoteCerts
	I0420 01:26:34.899652  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:34.899676  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.902298  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902745  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.902777  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902956  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.903150  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.903309  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.903457  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:34.993263  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:35.024837  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0420 01:26:35.054254  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:26:35.082455  141746 provision.go:87] duration metric: took 398.171071ms to configureAuth
	I0420 01:26:35.082488  141746 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:35.082741  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:26:35.082822  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.085868  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086264  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.086313  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086481  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.086708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.086868  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.087051  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.087254  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.087424  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.087440  141746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:35.374277  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:35.374305  141746 machine.go:97] duration metric: took 1.076369907s to provisionDockerMachine
	I0420 01:26:35.374327  141746 start.go:293] postStartSetup for "no-preload-338118" (driver="kvm2")
	I0420 01:26:35.374342  141746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:35.374366  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.374733  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:35.374787  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.378647  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.378998  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.379038  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.379149  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.379353  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.379518  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.379694  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.468711  141746 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:35.473783  141746 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:35.473808  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:35.473929  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:35.474088  141746 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:35.474217  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:35.484161  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:35.511695  141746 start.go:296] duration metric: took 137.354669ms for postStartSetup
	I0420 01:26:35.511751  141746 fix.go:56] duration metric: took 25.320502022s for fixHost
	I0420 01:26:35.511780  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.514635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.515067  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515247  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.515448  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515663  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515814  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.515988  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.516218  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.516240  141746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:35.632029  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576395.615634246
	
	I0420 01:26:35.632057  141746 fix.go:216] guest clock: 1713576395.615634246
	I0420 01:26:35.632067  141746 fix.go:229] Guest: 2024-04-20 01:26:35.615634246 +0000 UTC Remote: 2024-04-20 01:26:35.511757232 +0000 UTC m=+369.861721674 (delta=103.877014ms)
	I0420 01:26:35.632113  141746 fix.go:200] guest clock delta is within tolerance: 103.877014ms
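The fix.go lines above compare the guest VM clock against the host and accept the measured drift because it falls inside a tolerance window. A minimal Go sketch of that comparison, assuming a hypothetical 2-second tolerance (the actual threshold is not shown in this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest clock drifted from the host
    // clock by less than the allowed tolerance. The 2s value below is only an
    // assumption for illustration; the log reports the measured delta.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(103 * time.Millisecond) // a delta comparable to the one logged above
    	delta, ok := withinTolerance(guest, host, 2*time.Second)
    	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
    }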
	I0420 01:26:35.632137  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 25.440933699s
	I0420 01:26:35.632168  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.632486  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:35.635888  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636400  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.636440  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636751  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637250  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637448  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637547  141746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:35.637597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.637694  141746 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:35.637720  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.640562  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640800  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640953  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.640969  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641244  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641389  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.641433  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641644  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.641670  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641806  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.641873  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641997  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.642163  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:32.892859  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.893134  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:35.749528  141746 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:35.756960  141746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:35.912075  141746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:35.920264  141746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:35.920355  141746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:35.937729  141746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
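The find/mv step above sidelines pre-existing bridge and podman CNI configs so they cannot conflict with the CNI that minikube installs. A rough Go equivalent of that rename pass, assuming the same /etc/cni/net.d directory and .mk_disabled suffix seen in the log:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableConflictingCNIConfigs renames bridge/podman CNI config files so the
    // runtime ignores them, mirroring the "mv {} {}.mk_disabled" step in the log.
    func disableConflictingCNIConfigs(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
    			continue
    		}
    		src := filepath.Join(dir, name)
    		if err := os.Rename(src, src+".mk_disabled"); err != nil {
    			return disabled, err
    		}
    		disabled = append(disabled, src)
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("disabled:", disabled)
    }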
	I0420 01:26:35.937753  141746 start.go:494] detecting cgroup driver to use...
	I0420 01:26:35.937811  141746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:35.954425  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:35.970967  141746 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:35.971023  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:35.986186  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:36.000803  141746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:36.114673  141746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:36.273386  141746 docker.go:233] disabling docker service ...
	I0420 01:26:36.273472  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:36.290471  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:36.305722  141746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:36.459528  141746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:36.609105  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:36.627255  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:36.651459  141746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:26:36.651535  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.663171  141746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:36.663255  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.674706  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.686196  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.697909  141746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:36.709625  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.720746  141746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.740333  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.752898  141746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:36.764600  141746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:36.764653  141746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:36.780697  141746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:36.791440  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:36.936761  141746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:37.095374  141746 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:37.095475  141746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:37.101601  141746 start.go:562] Will wait 60s for crictl version
	I0420 01:26:37.101673  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.106191  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:37.152257  141746 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:37.152361  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.187172  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.225203  141746 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:26:33.625596  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.124972  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.624791  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.125630  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.624815  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.125677  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.625631  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.624883  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:38.124924  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.183893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.184381  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:37.226708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:37.229679  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230090  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:37.230131  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230253  141746 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:37.234914  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:37.249029  141746 kubeadm.go:877] updating cluster {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:37.249155  141746 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:26:37.249208  141746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:37.287235  141746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:26:37.287270  141746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:37.287341  141746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.287379  141746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.287387  141746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.287363  141746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.287414  141746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.287378  141746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.287399  141746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.287365  141746 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288833  141746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.288849  141746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.288863  141746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.288922  141746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.288933  141746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.288831  141746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.288957  141746 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288985  141746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.452705  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.462178  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.463495  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0420 01:26:37.469562  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.480726  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.501069  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.517291  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.533934  141746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0420 01:26:37.533976  141746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.534032  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.578341  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.602332  141746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0420 01:26:37.602381  141746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.602432  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.718979  141746 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0420 01:26:37.719028  141746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0420 01:26:37.719065  141746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0420 01:26:37.719093  141746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.719100  141746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0420 01:26:37.719126  141746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.719153  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719220  141746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0420 01:26:37.719256  141746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.719067  141746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.719155  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719306  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.719309  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719036  141746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.719369  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719154  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.719297  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.733974  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.802462  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.802496  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.802544  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0420 01:26:37.802575  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0420 01:26:37.802637  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.802708  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.802725  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0420 01:26:37.802788  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:37.897150  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0420 01:26:37.897190  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0420 01:26:37.897259  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:37.897268  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0420 01:26:37.897278  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:37.897285  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.897295  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0420 01:26:37.897337  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.902046  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0420 01:26:37.902094  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0420 01:26:37.902151  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:37.902307  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0420 01:26:37.902399  141746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:37.914016  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0420 01:26:40.184815  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.287511777s)
	I0420 01:26:40.184859  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0420 01:26:40.184918  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.282742718s)
	I0420 01:26:40.184951  141746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.282534359s)
	I0420 01:26:40.184974  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0420 01:26:40.184981  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0420 01:26:40.185052  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.287690505s)
	I0420 01:26:40.185081  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0420 01:26:40.185113  141746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:40.185175  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.392757  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:39.394094  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.624766  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.125330  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.624953  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.125409  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.625125  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.125460  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.625041  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.125103  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:43.125237  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.186531  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.683524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.252666  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067465398s)
	I0420 01:26:42.252710  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0420 01:26:42.252735  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:42.252774  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:44.616564  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.363755421s)
	I0420 01:26:44.616614  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0420 01:26:44.616649  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:44.616713  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:41.394300  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.895493  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.625155  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.124986  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.125834  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.625359  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.125706  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.625115  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.125204  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.625746  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:48.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.183628  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:47.684002  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:46.894590  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.277850916s)
	I0420 01:26:46.894626  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0420 01:26:46.894655  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:46.894712  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:49.158327  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.263583483s)
	I0420 01:26:49.158370  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0420 01:26:49.158406  141746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:49.158478  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:50.223297  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.06478687s)
	I0420 01:26:50.223344  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0420 01:26:50.223382  141746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:50.223452  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:46.393020  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.394414  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:50.893840  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.125441  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.625078  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.124787  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.624817  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.125211  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.625408  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.124903  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.624826  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:53.124728  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.183173  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:52.183563  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:54.187354  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.963876859s)
	I0420 01:26:54.187388  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0420 01:26:54.187416  141746 cache_images.go:123] Successfully loaded all cached images
	I0420 01:26:54.187426  141746 cache_images.go:92] duration metric: took 16.900140079s to LoadCachedImages
	I0420 01:26:54.187439  141746 kubeadm.go:928] updating node { 192.168.72.89 8443 v1.30.0 crio true true} ...
	I0420 01:26:54.187545  141746 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-338118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:54.187608  141746 ssh_runner.go:195] Run: crio config
	I0420 01:26:54.245888  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:26:54.245914  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:54.245928  141746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:54.245954  141746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.89 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-338118 NodeName:no-preload-338118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:26:54.246153  141746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-338118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:54.246232  141746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:26:54.259262  141746 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:54.259360  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:54.270769  141746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0420 01:26:54.290436  141746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:54.311846  141746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0420 01:26:54.332517  141746 ssh_runner.go:195] Run: grep 192.168.72.89	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:54.336874  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:54.350084  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:54.466328  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:54.484511  141746 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118 for IP: 192.168.72.89
	I0420 01:26:54.484545  141746 certs.go:194] generating shared ca certs ...
	I0420 01:26:54.484609  141746 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:54.484846  141746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:54.484960  141746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:54.484996  141746 certs.go:256] generating profile certs ...
	I0420 01:26:54.485165  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/client.key
	I0420 01:26:54.485273  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key.f8d917a4
	I0420 01:26:54.485353  141746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key
	I0420 01:26:54.485543  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:54.485604  141746 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:54.485622  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:54.485667  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:54.485707  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:54.485741  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:54.485804  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:54.486486  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:54.539867  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:54.575443  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:54.609857  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:54.638338  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 01:26:54.672043  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:54.704197  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:54.733771  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:26:54.761911  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:54.789278  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:54.816890  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:54.845884  141746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:54.864508  141746 ssh_runner.go:195] Run: openssl version
	I0420 01:26:54.870717  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:54.883192  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888532  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888588  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.895258  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:54.907346  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:54.919360  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924700  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924773  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.931133  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:54.942845  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:54.954785  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959769  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959856  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.966061  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
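
The ln -fs steps above install each CA under /etc/ssl/certs using the name printed by "openssl x509 -hash" (for example b5213941.0 for minikubeCA.pem). A minimal Go sketch of that hash-and-link step, assuming local paths and shelling out to openssl directly rather than reproducing minikube's ssh_runner plumbing:

// Illustrative sketch only (not minikube's code): link a CA certificate into
// /etc/ssl/certs under its OpenSSL subject-hash name, mirroring the
// "openssl x509 -hash" + "ln -fs" commands in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert assumes certPath already exists on the host and creates the
// hash-named symlink (e.g. /etc/ssl/certs/b5213941.0) pointing at it.
func installCACert(certPath string) error {
	// "openssl x509 -hash -noout -in <cert>" prints the subject hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))

	// Create the symlink only if it is not already present, as the log's test -L guard does.
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, certPath, link)
	if err := exec.Command("sudo", "/bin/bash", "-c", cmd).Run(); err != nil {
		return fmt.Errorf("linking %s -> %s: %w", link, certPath, err)
	}
	return nil
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}
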
	I0420 01:26:54.978389  141746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:54.983591  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:54.990157  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:54.996977  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:55.004103  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:55.010928  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:55.018024  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
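
The "openssl x509 -checkend 86400" runs above succeed only when a certificate remains valid for at least the next 24 hours. A rough in-process equivalent using crypto/x509, assuming a local PEM file path (the path below is taken from the log for illustration):

// Illustrative sketch only: an in-process equivalent of "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM-encoded certificate at path is currently valid
// and will remain valid for at least d from now (the -checkend semantics).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	now := time.Now()
	return now.After(cert.NotBefore) && now.Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
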
	I0420 01:26:55.024639  141746 kubeadm.go:391] StartCluster: {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:55.024733  141746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:55.024784  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.073888  141746 cri.go:89] found id: ""
	I0420 01:26:55.073954  141746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:55.087179  141746 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:55.087199  141746 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:55.087208  141746 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:55.087255  141746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:55.098975  141746 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:55.100487  141746 kubeconfig.go:125] found "no-preload-338118" server: "https://192.168.72.89:8443"
	I0420 01:26:55.103557  141746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:55.114871  141746 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.89
	I0420 01:26:55.114900  141746 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:55.114914  141746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:55.114983  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.174863  141746 cri.go:89] found id: ""
	I0420 01:26:55.174969  141746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:55.192867  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:55.203842  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:55.203866  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:55.203919  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:55.214476  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:55.214534  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:55.224728  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:55.235353  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:55.235403  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:55.245905  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.256614  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:55.256678  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.266909  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:55.276249  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:55.276294  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:55.285758  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:55.295896  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:55.418331  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:53.394623  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:55.893492  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:53.625614  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.125487  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.625414  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.125150  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.624831  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.125438  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.625450  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.125591  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.625757  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:58.124963  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.186686  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.681991  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.682958  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.156484  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.376987  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.450655  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.517915  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:56.518018  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.018277  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.518215  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.538017  141746 api_server.go:72] duration metric: took 1.020104679s to wait for apiserver process to appear ...
	I0420 01:26:57.538045  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:26:57.538070  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
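
From here api_server.go repeatedly probes https://192.168.72.89:8443/healthz until the restarted apiserver answers. A minimal polling sketch of that wait loop, assuming a 5s per-request timeout and skipping TLS verification for a throwaway test cluster (this is not minikube's actual implementation):

// Illustrative sketch only: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test-only assumption
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported ok
			}
		}
		time.Sleep(500 * time.Millisecond) // retry until the apiserver comes back up
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.89:8443/healthz", 4*time.Minute))
}
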
	I0420 01:26:58.392944  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:00.892688  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.625549  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.125177  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.624704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.125709  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.625346  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.124849  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.624947  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.125407  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.625704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:03.125695  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.182564  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.183451  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:02.538442  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:02.538498  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:03.396891  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:05.896375  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.625423  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.124806  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.625232  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.124917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.624983  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.124851  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.625029  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.125554  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.625163  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:08.125455  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.682216  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.683636  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.538926  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:07.538973  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:08.392765  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:10.392933  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:08.625100  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.125395  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.625454  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.125615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.624892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.125366  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.625074  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.125165  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.625629  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:13.124824  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.182884  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.683893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.540046  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:12.540121  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:12.393561  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:14.893756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:13.625040  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.125511  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.624890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.125622  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.625393  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.125215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.625561  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.125263  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.624772  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:18.125597  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.183734  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.683742  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.540652  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:17.540701  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.076616  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": read tcp 192.168.72.1:34174->192.168.72.89:8443: read: connection reset by peer
	I0420 01:27:18.076671  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.077186  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:18.538798  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.539454  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:19.039080  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:17.393196  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:19.395273  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:18.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.124956  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.625579  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.124827  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.625212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:21.125476  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:21.125553  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:21.174633  142411 cri.go:89] found id: ""
	I0420 01:27:21.174668  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.174679  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:21.174686  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:21.174767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:21.218230  142411 cri.go:89] found id: ""
	I0420 01:27:21.218263  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.218275  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:21.218284  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:21.218369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:21.258886  142411 cri.go:89] found id: ""
	I0420 01:27:21.258916  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.258926  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:21.258932  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:21.259003  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:21.306725  142411 cri.go:89] found id: ""
	I0420 01:27:21.306758  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.306769  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:21.306777  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:21.306843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:21.349049  142411 cri.go:89] found id: ""
	I0420 01:27:21.349086  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.349098  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:21.349106  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:21.349174  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:21.392312  142411 cri.go:89] found id: ""
	I0420 01:27:21.392338  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.392346  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:21.392352  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:21.392425  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:21.434121  142411 cri.go:89] found id: ""
	I0420 01:27:21.434148  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.434156  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:21.434162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:21.434210  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:21.473728  142411 cri.go:89] found id: ""
	I0420 01:27:21.473754  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.473762  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:21.473772  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:21.473785  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:21.537607  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:21.537648  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:21.554563  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:21.554604  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:21.674778  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:21.674803  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:21.674829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:21.740625  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:21.740666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
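
The log-gathering pass above runs "crictl ps -a --quiet --name=<component>" for each control-plane component and notes when nothing matches. A small sketch of driving those same probes from Go, assuming sudo access and a local crictl on PATH:

// Illustrative sketch only: check which control-plane containers crictl can see.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
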
	I0420 01:27:20.182461  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:22.682574  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.039641  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:24.039690  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:21.397381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:23.893642  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.284890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:24.301486  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:24.301571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:24.340987  142411 cri.go:89] found id: ""
	I0420 01:27:24.341012  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.341021  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:24.341026  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:24.341102  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:24.379983  142411 cri.go:89] found id: ""
	I0420 01:27:24.380014  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.380024  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:24.380029  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:24.380113  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:24.438700  142411 cri.go:89] found id: ""
	I0420 01:27:24.438729  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.438739  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:24.438745  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:24.438795  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:24.487761  142411 cri.go:89] found id: ""
	I0420 01:27:24.487793  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.487802  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:24.487808  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:24.487870  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:24.529408  142411 cri.go:89] found id: ""
	I0420 01:27:24.529439  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.529448  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:24.529453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:24.529523  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:24.572782  142411 cri.go:89] found id: ""
	I0420 01:27:24.572817  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.572831  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:24.572841  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:24.572910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:24.620651  142411 cri.go:89] found id: ""
	I0420 01:27:24.620684  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.620696  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:24.620704  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:24.620769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:24.659481  142411 cri.go:89] found id: ""
	I0420 01:27:24.659513  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.659525  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:24.659537  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:24.659552  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.714483  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:24.714517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:24.730279  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:24.730316  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:24.804883  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:24.804909  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:24.804926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:24.879557  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:24.879602  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:27.431026  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:27.448112  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:27.448176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:27.494959  142411 cri.go:89] found id: ""
	I0420 01:27:27.494988  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.494999  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:27.495007  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:27.495075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:27.532023  142411 cri.go:89] found id: ""
	I0420 01:27:27.532055  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.532066  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:27.532075  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:27.532151  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:27.578551  142411 cri.go:89] found id: ""
	I0420 01:27:27.578600  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.578613  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:27.578621  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:27.578692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:27.618248  142411 cri.go:89] found id: ""
	I0420 01:27:27.618277  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.618288  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:27.618296  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:27.618363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:27.655682  142411 cri.go:89] found id: ""
	I0420 01:27:27.655714  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.655723  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:27.655729  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:27.655787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:27.696355  142411 cri.go:89] found id: ""
	I0420 01:27:27.696389  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.696400  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:27.696408  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:27.696478  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:27.735354  142411 cri.go:89] found id: ""
	I0420 01:27:27.735378  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.735396  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:27.735402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:27.735460  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:27.775234  142411 cri.go:89] found id: ""
	I0420 01:27:27.775261  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.775269  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:27.775277  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:27.775294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:27.789970  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:27.790005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:27.873345  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:27.873371  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:27.873387  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:27.952309  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:27.952353  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:28.003746  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:28.003792  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.683122  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:27.182311  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:29.040691  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:29.040743  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:26.394161  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:28.893349  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.893785  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.555691  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:30.570962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:30.571041  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:30.613185  142411 cri.go:89] found id: ""
	I0420 01:27:30.613218  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.613227  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:30.613233  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:30.613291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:30.654494  142411 cri.go:89] found id: ""
	I0420 01:27:30.654520  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.654529  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:30.654535  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:30.654600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:30.702605  142411 cri.go:89] found id: ""
	I0420 01:27:30.702634  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.702646  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:30.702653  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:30.702719  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:30.742072  142411 cri.go:89] found id: ""
	I0420 01:27:30.742104  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.742115  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:30.742123  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:30.742191  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:30.793199  142411 cri.go:89] found id: ""
	I0420 01:27:30.793232  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.793244  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:30.793252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:30.793340  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:30.832978  142411 cri.go:89] found id: ""
	I0420 01:27:30.833019  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.833034  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:30.833044  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:30.833126  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:30.875606  142411 cri.go:89] found id: ""
	I0420 01:27:30.875641  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.875655  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:30.875662  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:30.875729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:30.917288  142411 cri.go:89] found id: ""
	I0420 01:27:30.917335  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.917348  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:30.917360  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:30.917375  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:30.996446  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:30.996469  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:30.996485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:31.080494  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:31.080543  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:31.141226  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:31.141260  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:31.212808  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:31.212845  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:29.182651  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:31.183179  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.682476  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:34.041737  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:34.041789  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:33.393756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:35.395120  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.728927  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:33.745749  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:33.745835  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:33.788813  142411 cri.go:89] found id: ""
	I0420 01:27:33.788845  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.788859  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:33.788868  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:33.788936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:33.834918  142411 cri.go:89] found id: ""
	I0420 01:27:33.834948  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.834957  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:33.834963  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:33.835026  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:33.873928  142411 cri.go:89] found id: ""
	I0420 01:27:33.873960  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.873972  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:33.873977  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:33.874027  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:33.921462  142411 cri.go:89] found id: ""
	I0420 01:27:33.921497  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.921510  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:33.921519  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:33.921606  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:33.962280  142411 cri.go:89] found id: ""
	I0420 01:27:33.962308  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.962320  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:33.962329  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:33.962390  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:34.002582  142411 cri.go:89] found id: ""
	I0420 01:27:34.002616  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.002627  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:34.002635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:34.002707  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:34.047383  142411 cri.go:89] found id: ""
	I0420 01:27:34.047410  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.047421  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:34.047428  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:34.047489  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:34.088296  142411 cri.go:89] found id: ""
	I0420 01:27:34.088341  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.088352  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:34.088364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:34.088381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:34.180338  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:34.180380  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:34.224386  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:34.224422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:34.278451  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:34.278488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:34.294377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:34.294409  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:34.377115  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:36.878000  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:36.896875  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:36.896953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:36.953915  142411 cri.go:89] found id: ""
	I0420 01:27:36.953954  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.953968  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:36.953977  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:36.954056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:36.998223  142411 cri.go:89] found id: ""
	I0420 01:27:36.998250  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.998260  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:36.998268  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:36.998337  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:37.069299  142411 cri.go:89] found id: ""
	I0420 01:27:37.069346  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.069358  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:37.069366  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:37.069436  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:37.112068  142411 cri.go:89] found id: ""
	I0420 01:27:37.112100  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.112112  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:37.112119  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:37.112175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:37.155883  142411 cri.go:89] found id: ""
	I0420 01:27:37.155913  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.155924  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:37.155933  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:37.156006  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:37.200979  142411 cri.go:89] found id: ""
	I0420 01:27:37.201007  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.201018  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:37.201026  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:37.201091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:37.241639  142411 cri.go:89] found id: ""
	I0420 01:27:37.241667  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.241678  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:37.241686  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:37.241748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:37.281845  142411 cri.go:89] found id: ""
	I0420 01:27:37.281883  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.281894  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:37.281907  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:37.281923  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:37.327428  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:37.327463  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:37.385213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:37.385248  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:37.400158  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:37.400190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:37.476662  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:37.476687  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:37.476700  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:37.090819  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:27:37.090858  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:27:37.090877  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.124020  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.124076  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:37.538389  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.550894  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.550930  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.038486  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.051983  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:38.052019  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.538297  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.544961  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:27:38.553038  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:27:38.553065  141746 api_server.go:131] duration metric: took 41.015012791s to wait for apiserver health ...
	I0420 01:27:38.553075  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:27:38.553081  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:27:38.554687  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:27:35.684396  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.183391  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.555934  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:27:38.575384  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:27:38.609934  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:27:38.637152  141746 system_pods.go:59] 8 kube-system pods found
	I0420 01:27:38.637184  141746 system_pods.go:61] "coredns-7db6d8ff4d-r2hs7" [981840a2-82cd-49e0-8d4f-fbaf05290668] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:27:38.637191  141746 system_pods.go:61] "etcd-no-preload-338118" [92fc0da4-63d3-4f34-a5a6-27b73e7e210d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:27:38.637198  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [9f7bd5df-f733-4944-9ad2-0c9f0ea4529b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:27:38.637206  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [d7a0bd6a-2cd0-4b27-ae83-ae38c1a20c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:27:38.637215  141746 system_pods.go:61] "kube-proxy-zgq86" [d379ae65-c579-47e4-b055-6512e74868a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0420 01:27:38.637219  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [99558213-289d-4682-ba8e-20175c815563] Running
	I0420 01:27:38.637225  141746 system_pods.go:61] "metrics-server-569cc877fc-lcbcz" [1d2b716a-555a-46aa-ae27-c40553c94288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:27:38.637229  141746 system_pods.go:61] "storage-provisioner" [a8316010-8689-42aa-9741-227bf55a16bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:27:38.637236  141746 system_pods.go:74] duration metric: took 27.280844ms to wait for pod list to return data ...
	I0420 01:27:38.637243  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:27:38.640744  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:27:38.640774  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:27:38.640791  141746 node_conditions.go:105] duration metric: took 3.542872ms to run NodePressure ...
	I0420 01:27:38.640813  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:27:38.979785  141746 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987541  141746 kubeadm.go:733] kubelet initialised
	I0420 01:27:38.987570  141746 kubeadm.go:734] duration metric: took 7.752383ms waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987582  141746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:27:38.994929  141746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:38.999872  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999903  141746 pod_ready.go:81] duration metric: took 4.940439ms for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:38.999915  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999923  141746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.004575  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004595  141746 pod_ready.go:81] duration metric: took 4.662163ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.004603  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004608  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.012365  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012386  141746 pod_ready.go:81] duration metric: took 7.773001ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.012393  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012400  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.019091  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019125  141746 pod_ready.go:81] duration metric: took 6.70398ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.019137  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019146  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:37.894228  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:39.899004  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:40.075888  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:40.091313  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:40.091389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:40.134013  142411 cri.go:89] found id: ""
	I0420 01:27:40.134039  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.134048  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:40.134053  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:40.134136  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:40.182108  142411 cri.go:89] found id: ""
	I0420 01:27:40.182140  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.182151  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:40.182158  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:40.182222  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:40.225406  142411 cri.go:89] found id: ""
	I0420 01:27:40.225438  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.225447  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:40.225453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:40.225539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.267599  142411 cri.go:89] found id: ""
	I0420 01:27:40.267627  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.267636  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:40.267645  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:40.267790  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:40.309385  142411 cri.go:89] found id: ""
	I0420 01:27:40.309418  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.309439  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:40.309448  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:40.309525  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:40.351947  142411 cri.go:89] found id: ""
	I0420 01:27:40.351980  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.351993  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:40.352003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:40.352079  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:40.395583  142411 cri.go:89] found id: ""
	I0420 01:27:40.395614  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.395623  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:40.395629  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:40.395692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:40.441348  142411 cri.go:89] found id: ""
	I0420 01:27:40.441397  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.441412  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:40.441426  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:40.441445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:40.498231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:40.498268  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:40.514550  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:40.514578  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:40.593580  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:40.593614  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:40.593631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:40.671736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:40.671778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:43.224892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:43.240876  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:43.240939  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:43.281583  142411 cri.go:89] found id: ""
	I0420 01:27:43.281621  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.281634  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:43.281643  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:43.281705  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:43.321079  142411 cri.go:89] found id: ""
	I0420 01:27:43.321115  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.321125  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:43.321132  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:43.321277  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:43.365827  142411 cri.go:89] found id: ""
	I0420 01:27:43.365855  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.365864  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:43.365870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:43.365921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.184872  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.683826  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:41.025729  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.025868  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:45.526436  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.393681  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:44.401124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.404317  142411 cri.go:89] found id: ""
	I0420 01:27:43.404349  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.404361  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:43.404370  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:43.404443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:43.449268  142411 cri.go:89] found id: ""
	I0420 01:27:43.449299  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.449323  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:43.449331  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:43.449408  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:43.487782  142411 cri.go:89] found id: ""
	I0420 01:27:43.487829  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.487837  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:43.487844  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:43.487909  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:43.526650  142411 cri.go:89] found id: ""
	I0420 01:27:43.526677  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.526688  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:43.526695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:43.526755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:43.565288  142411 cri.go:89] found id: ""
	I0420 01:27:43.565328  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.565340  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:43.565352  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:43.565368  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:43.618013  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:43.618046  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:43.634064  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:43.634101  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:43.710633  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:43.710663  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:43.710679  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:43.796658  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:43.796709  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.352329  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:46.366848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:46.366935  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:46.413643  142411 cri.go:89] found id: ""
	I0420 01:27:46.413676  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.413687  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:46.413695  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:46.413762  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:46.457976  142411 cri.go:89] found id: ""
	I0420 01:27:46.458002  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.458011  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:46.458020  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:46.458086  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:46.500291  142411 cri.go:89] found id: ""
	I0420 01:27:46.500317  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.500328  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:46.500334  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:46.500398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:46.541279  142411 cri.go:89] found id: ""
	I0420 01:27:46.541331  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.541343  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:46.541359  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:46.541442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:46.585613  142411 cri.go:89] found id: ""
	I0420 01:27:46.585642  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.585654  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:46.585661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:46.585726  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:46.634400  142411 cri.go:89] found id: ""
	I0420 01:27:46.634430  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.634441  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:46.634450  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:46.634534  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:46.676276  142411 cri.go:89] found id: ""
	I0420 01:27:46.676305  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.676313  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:46.676320  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:46.676380  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:46.719323  142411 cri.go:89] found id: ""
	I0420 01:27:46.719356  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.719369  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:46.719381  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:46.719398  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:46.799735  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:46.799765  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:46.799790  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:46.878323  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:46.878371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.931870  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:46.931902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:46.983217  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:46.983250  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:45.182485  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.183499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.526708  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:50.034262  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:46.897249  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.393599  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.500147  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:49.517380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:49.517461  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:49.561300  142411 cri.go:89] found id: ""
	I0420 01:27:49.561347  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.561358  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:49.561365  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:49.561432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:49.604569  142411 cri.go:89] found id: ""
	I0420 01:27:49.604594  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.604608  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:49.604614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:49.604664  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:49.644952  142411 cri.go:89] found id: ""
	I0420 01:27:49.644983  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.644999  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:49.645006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:49.645071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:49.694719  142411 cri.go:89] found id: ""
	I0420 01:27:49.694749  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.694757  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:49.694764  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:49.694815  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:49.743821  142411 cri.go:89] found id: ""
	I0420 01:27:49.743849  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.743857  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:49.743865  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:49.743936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:49.789125  142411 cri.go:89] found id: ""
	I0420 01:27:49.789152  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.789161  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:49.789167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:49.789233  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:49.828794  142411 cri.go:89] found id: ""
	I0420 01:27:49.828829  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.828841  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:49.828848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:49.828913  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:49.873335  142411 cri.go:89] found id: ""
	I0420 01:27:49.873366  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.873375  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:49.873385  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:49.873397  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.930590  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:49.930632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:49.946850  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:49.946889  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:50.039200  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:50.039220  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:50.039236  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:50.122067  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:50.122118  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:52.664342  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:52.682978  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:52.683061  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:52.733806  142411 cri.go:89] found id: ""
	I0420 01:27:52.733836  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.733848  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:52.733855  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:52.733921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:52.785977  142411 cri.go:89] found id: ""
	I0420 01:27:52.786008  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.786020  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:52.786027  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:52.786092  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:52.826957  142411 cri.go:89] found id: ""
	I0420 01:27:52.826987  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.826995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:52.827001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:52.827056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:52.876208  142411 cri.go:89] found id: ""
	I0420 01:27:52.876251  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.876265  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:52.876276  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:52.876354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:52.918629  142411 cri.go:89] found id: ""
	I0420 01:27:52.918666  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.918679  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:52.918687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:52.918767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:52.967604  142411 cri.go:89] found id: ""
	I0420 01:27:52.967646  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.967655  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:52.967661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:52.967729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:53.010948  142411 cri.go:89] found id: ""
	I0420 01:27:53.010975  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.010983  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:53.010988  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:53.011039  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:53.055569  142411 cri.go:89] found id: ""
	I0420 01:27:53.055594  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.055611  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:53.055620  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:53.055633  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:53.071038  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:53.071067  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:53.151334  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:53.151364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:53.151381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:53.238509  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:53.238553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:53.284898  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:53.284945  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.183562  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.682524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:53.684003  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.027739  141746 pod_ready.go:92] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.027773  141746 pod_ready.go:81] duration metric: took 12.008613872s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.027785  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033100  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.033124  141746 pod_ready.go:81] duration metric: took 5.331694ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033136  141746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:53.041387  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.542345  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.896822  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:54.395015  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.843065  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:55.856928  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:55.857001  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:55.903058  142411 cri.go:89] found id: ""
	I0420 01:27:55.903092  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.903103  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:55.903111  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:55.903170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:55.944369  142411 cri.go:89] found id: ""
	I0420 01:27:55.944402  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.944414  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:55.944421  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:55.944474  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:55.983485  142411 cri.go:89] found id: ""
	I0420 01:27:55.983510  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.983517  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:55.983523  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:55.983571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:56.021931  142411 cri.go:89] found id: ""
	I0420 01:27:56.021956  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.021964  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:56.021970  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:56.022019  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:56.066671  142411 cri.go:89] found id: ""
	I0420 01:27:56.066705  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.066717  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:56.066724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:56.066788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:56.107724  142411 cri.go:89] found id: ""
	I0420 01:27:56.107783  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.107794  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:56.107800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:56.107854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:56.149201  142411 cri.go:89] found id: ""
	I0420 01:27:56.149234  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.149246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:56.149255  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:56.149328  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:56.189580  142411 cri.go:89] found id: ""
	I0420 01:27:56.189621  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.189633  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:56.189645  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:56.189661  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:56.243425  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:56.243462  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:56.261043  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:56.261079  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:56.341944  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:56.341967  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:56.341980  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:56.423252  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:56.423294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:55.684408  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.183545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:57.542492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.040617  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:56.892991  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.893124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.893660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.968894  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:58.984559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:58.984648  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:59.021603  142411 cri.go:89] found id: ""
	I0420 01:27:59.021634  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.021655  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:59.021666  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:59.021756  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:59.061592  142411 cri.go:89] found id: ""
	I0420 01:27:59.061626  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.061642  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:59.061649  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:59.061701  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:59.101956  142411 cri.go:89] found id: ""
	I0420 01:27:59.101986  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.101996  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:59.102003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:59.102072  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:59.141104  142411 cri.go:89] found id: ""
	I0420 01:27:59.141136  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.141145  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:59.141151  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:59.141221  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:59.188973  142411 cri.go:89] found id: ""
	I0420 01:27:59.189005  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.189014  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:59.189022  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:59.189107  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:59.232598  142411 cri.go:89] found id: ""
	I0420 01:27:59.232632  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.232641  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:59.232647  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:59.232704  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:59.272623  142411 cri.go:89] found id: ""
	I0420 01:27:59.272660  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.272669  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:59.272675  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:59.272739  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:59.309951  142411 cri.go:89] found id: ""
	I0420 01:27:59.309977  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.309984  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:59.309994  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:59.310005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:59.366589  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:59.366626  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:59.382724  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:59.382756  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:59.461072  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:59.461102  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:59.461122  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:59.544736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:59.544769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:02.089118  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:02.105402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:02.105483  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:02.144665  142411 cri.go:89] found id: ""
	I0420 01:28:02.144691  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.144700  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:02.144706  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:02.144759  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:02.187471  142411 cri.go:89] found id: ""
	I0420 01:28:02.187498  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.187508  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:02.187515  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:02.187576  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:02.229206  142411 cri.go:89] found id: ""
	I0420 01:28:02.229233  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.229241  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:02.229247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:02.229335  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:02.279425  142411 cri.go:89] found id: ""
	I0420 01:28:02.279464  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.279478  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:02.279488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:02.279577  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:02.323033  142411 cri.go:89] found id: ""
	I0420 01:28:02.323066  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.323082  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:02.323090  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:02.323155  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:02.360121  142411 cri.go:89] found id: ""
	I0420 01:28:02.360158  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.360170  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:02.360178  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:02.360244  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:02.398756  142411 cri.go:89] found id: ""
	I0420 01:28:02.398786  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.398797  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:02.398804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:02.398867  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:02.437982  142411 cri.go:89] found id: ""
	I0420 01:28:02.438010  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.438018  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:02.438028  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:02.438041  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:02.489396  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:02.489434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:02.506764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:02.506796  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:02.591894  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:02.591915  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:02.591929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:02.675241  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:02.675281  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:00.683139  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.684787  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.540829  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.041823  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:03.393076  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.396351  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.224296  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.238522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:05.238593  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:05.278495  142411 cri.go:89] found id: ""
	I0420 01:28:05.278529  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.278540  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:05.278549  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:05.278621  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:05.318096  142411 cri.go:89] found id: ""
	I0420 01:28:05.318122  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.318130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:05.318136  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:05.318196  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:05.358607  142411 cri.go:89] found id: ""
	I0420 01:28:05.358636  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.358653  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:05.358658  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:05.358749  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:05.417163  142411 cri.go:89] found id: ""
	I0420 01:28:05.417199  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.417211  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:05.417218  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:05.417284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:05.468566  142411 cri.go:89] found id: ""
	I0420 01:28:05.468599  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.468610  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:05.468619  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:05.468691  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:05.514005  142411 cri.go:89] found id: ""
	I0420 01:28:05.514037  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.514047  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:05.514055  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:05.514112  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:05.554972  142411 cri.go:89] found id: ""
	I0420 01:28:05.555001  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.555012  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:05.555020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:05.555083  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:05.596736  142411 cri.go:89] found id: ""
	I0420 01:28:05.596764  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.596773  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:05.596787  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:05.596800  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:05.649680  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:05.649719  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:05.667583  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:05.667614  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:05.743886  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:05.743922  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:05.743939  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:05.827827  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:05.827863  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.384615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.181917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.182902  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.541045  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:09.542114  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.892610  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:10.392899  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:08.401190  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:08.403071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:08.445453  142411 cri.go:89] found id: ""
	I0420 01:28:08.445486  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.445497  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:08.445505  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:08.445573  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:08.487598  142411 cri.go:89] found id: ""
	I0420 01:28:08.487636  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.487649  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:08.487657  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:08.487727  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:08.531416  142411 cri.go:89] found id: ""
	I0420 01:28:08.531445  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.531457  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:08.531465  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:08.531526  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:08.574964  142411 cri.go:89] found id: ""
	I0420 01:28:08.575000  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.575012  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:08.575020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:08.575075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:08.612644  142411 cri.go:89] found id: ""
	I0420 01:28:08.612679  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.612688  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:08.612695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:08.612748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:08.651775  142411 cri.go:89] found id: ""
	I0420 01:28:08.651800  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.651811  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:08.651817  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:08.651869  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:08.692869  142411 cri.go:89] found id: ""
	I0420 01:28:08.692894  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.692902  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:08.692908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:08.692957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:08.731765  142411 cri.go:89] found id: ""
	I0420 01:28:08.731794  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.731805  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:08.731817  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:08.731836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:08.747401  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:08.747445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:08.831069  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:08.831091  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:08.831110  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:08.919053  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:08.919095  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.965814  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:08.965854  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.518303  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:11.535213  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:11.535294  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:11.577182  142411 cri.go:89] found id: ""
	I0420 01:28:11.577214  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.577223  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:11.577229  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:11.577289  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:11.615023  142411 cri.go:89] found id: ""
	I0420 01:28:11.615055  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.615064  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:11.615070  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:11.615138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:11.654062  142411 cri.go:89] found id: ""
	I0420 01:28:11.654089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.654097  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:11.654104  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:11.654170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:11.700846  142411 cri.go:89] found id: ""
	I0420 01:28:11.700875  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.700885  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:11.700892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:11.700966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:11.743061  142411 cri.go:89] found id: ""
	I0420 01:28:11.743089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.743100  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:11.743109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:11.743175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:11.783651  142411 cri.go:89] found id: ""
	I0420 01:28:11.783687  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.783698  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:11.783706  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:11.783781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:11.827099  142411 cri.go:89] found id: ""
	I0420 01:28:11.827130  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.827139  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:11.827144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:11.827197  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:11.867476  142411 cri.go:89] found id: ""
	I0420 01:28:11.867510  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.867523  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:11.867535  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:11.867554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.920211  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:11.920246  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:11.937632  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:11.937670  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:12.014917  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:12.014940  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:12.014955  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:12.096549  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:12.096586  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:09.684447  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.183063  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.041220  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.540620  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.893441  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:15.408953  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.653783  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:14.667893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:14.667955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:14.710098  142411 cri.go:89] found id: ""
	I0420 01:28:14.710153  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.710164  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:14.710172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:14.710240  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:14.750891  142411 cri.go:89] found id: ""
	I0420 01:28:14.750920  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.750929  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:14.750939  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:14.751010  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:14.794062  142411 cri.go:89] found id: ""
	I0420 01:28:14.794103  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.794127  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:14.794135  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:14.794204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:14.834333  142411 cri.go:89] found id: ""
	I0420 01:28:14.834363  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.834375  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:14.834383  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:14.834446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:14.874114  142411 cri.go:89] found id: ""
	I0420 01:28:14.874148  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.874160  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:14.874168  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:14.874238  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:14.912685  142411 cri.go:89] found id: ""
	I0420 01:28:14.912711  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.912720  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:14.912726  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:14.912787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:14.954050  142411 cri.go:89] found id: ""
	I0420 01:28:14.954076  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.954083  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:14.954089  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:14.954150  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:14.992310  142411 cri.go:89] found id: ""
	I0420 01:28:14.992348  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.992357  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:14.992365  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:14.992388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:15.047471  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:15.047512  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:15.065800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:15.065842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:15.146009  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:15.146037  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:15.146058  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:15.232920  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:15.232962  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:17.781215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:17.797404  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:17.797466  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:17.840532  142411 cri.go:89] found id: ""
	I0420 01:28:17.840564  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.840573  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:17.840579  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:17.840636  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:17.881562  142411 cri.go:89] found id: ""
	I0420 01:28:17.881588  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.881596  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:17.881602  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:17.881651  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:17.935068  142411 cri.go:89] found id: ""
	I0420 01:28:17.935098  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.935108  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:17.935115  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:17.935177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:17.980745  142411 cri.go:89] found id: ""
	I0420 01:28:17.980782  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.980795  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:17.980804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:17.980880  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:18.051120  142411 cri.go:89] found id: ""
	I0420 01:28:18.051153  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.051164  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:18.051171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:18.051235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:18.091741  142411 cri.go:89] found id: ""
	I0420 01:28:18.091776  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.091788  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:18.091796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:18.091864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:18.133438  142411 cri.go:89] found id: ""
	I0420 01:28:18.133472  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.133482  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:18.133488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:18.133560  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:18.174624  142411 cri.go:89] found id: ""
	I0420 01:28:18.174665  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.174679  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:18.174694  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:18.174713  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:18.228519  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:18.228563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:18.246452  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:18.246487  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:18.322051  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:18.322074  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:18.322088  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:14.684817  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.182405  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:16.541139  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.041191  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.895052  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.895901  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:18.404873  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:18.404904  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:20.950553  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:20.965081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:20.965139  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:21.007198  142411 cri.go:89] found id: ""
	I0420 01:28:21.007243  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.007255  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:21.007263  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:21.007330  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:21.050991  142411 cri.go:89] found id: ""
	I0420 01:28:21.051019  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.051028  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:21.051034  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:21.051104  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:21.091953  142411 cri.go:89] found id: ""
	I0420 01:28:21.091986  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.091995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:21.092001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:21.092085  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:21.134134  142411 cri.go:89] found id: ""
	I0420 01:28:21.134164  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.134174  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:21.134181  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:21.134251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:21.173698  142411 cri.go:89] found id: ""
	I0420 01:28:21.173724  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.173731  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:21.173737  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:21.173801  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:21.221327  142411 cri.go:89] found id: ""
	I0420 01:28:21.221354  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.221362  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:21.221369  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:21.221428  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:21.262752  142411 cri.go:89] found id: ""
	I0420 01:28:21.262780  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.262791  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:21.262798  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:21.262851  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:21.303497  142411 cri.go:89] found id: ""
	I0420 01:28:21.303524  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.303535  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:21.303547  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:21.303563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:21.358231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:21.358265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:21.373723  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:21.373753  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:21.465016  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:21.465044  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:21.465061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:21.552087  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:21.552117  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:19.683617  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.182720  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:21.540588  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.039211  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.393170  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.396378  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.099938  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:24.116967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:24.117045  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:24.159458  142411 cri.go:89] found id: ""
	I0420 01:28:24.159491  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.159501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:24.159508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:24.159574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:24.206028  142411 cri.go:89] found id: ""
	I0420 01:28:24.206054  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.206065  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:24.206072  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:24.206137  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:24.248047  142411 cri.go:89] found id: ""
	I0420 01:28:24.248088  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.248101  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:24.248109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:24.248176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:24.287867  142411 cri.go:89] found id: ""
	I0420 01:28:24.287898  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.287909  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:24.287917  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:24.287995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:24.329399  142411 cri.go:89] found id: ""
	I0420 01:28:24.329433  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.329444  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:24.329452  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:24.329519  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:24.367846  142411 cri.go:89] found id: ""
	I0420 01:28:24.367871  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.367882  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:24.367889  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:24.367960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:24.414245  142411 cri.go:89] found id: ""
	I0420 01:28:24.414272  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.414283  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:24.414291  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:24.414354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:24.453268  142411 cri.go:89] found id: ""
	I0420 01:28:24.453302  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.453331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:24.453344  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:24.453366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:24.514501  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:24.514546  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:24.529551  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:24.529591  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:24.613734  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.613757  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:24.613775  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:24.693804  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:24.693843  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.238443  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:27.254172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:27.254235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:27.297048  142411 cri.go:89] found id: ""
	I0420 01:28:27.297101  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.297111  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:27.297119  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:27.297181  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:27.340145  142411 cri.go:89] found id: ""
	I0420 01:28:27.340171  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.340181  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:27.340189  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:27.340316  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:27.383047  142411 cri.go:89] found id: ""
	I0420 01:28:27.383077  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.383089  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:27.383096  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:27.383169  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:27.428088  142411 cri.go:89] found id: ""
	I0420 01:28:27.428122  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.428134  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:27.428142  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:27.428206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:27.468257  142411 cri.go:89] found id: ""
	I0420 01:28:27.468300  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.468310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:27.468317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:27.468389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:27.508834  142411 cri.go:89] found id: ""
	I0420 01:28:27.508873  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.508885  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:27.508892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:27.508953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:27.548853  142411 cri.go:89] found id: ""
	I0420 01:28:27.548893  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.548901  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:27.548908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:27.548956  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:27.587841  142411 cri.go:89] found id: ""
	I0420 01:28:27.587875  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.587886  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:27.587899  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:27.587917  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:27.667848  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:27.667888  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.714820  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:27.714856  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:27.766337  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:27.766381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:27.782585  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:27.782627  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:27.856172  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.184768  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.683097  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.040531  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:28.040802  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.542386  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.893091  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:29.393546  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.356809  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:30.372449  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:30.372529  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:30.422164  142411 cri.go:89] found id: ""
	I0420 01:28:30.422198  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.422209  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:30.422218  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:30.422283  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:30.460367  142411 cri.go:89] found id: ""
	I0420 01:28:30.460395  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.460404  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:30.460411  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:30.460498  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:30.508423  142411 cri.go:89] found id: ""
	I0420 01:28:30.508460  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.508471  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:30.508479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:30.508546  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:30.553124  142411 cri.go:89] found id: ""
	I0420 01:28:30.553152  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.553161  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:30.553167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:30.553225  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:30.601866  142411 cri.go:89] found id: ""
	I0420 01:28:30.601908  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.601919  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:30.601939  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:30.602014  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:30.645413  142411 cri.go:89] found id: ""
	I0420 01:28:30.645446  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.645457  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:30.645467  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:30.645539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:30.690955  142411 cri.go:89] found id: ""
	I0420 01:28:30.690988  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.690997  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:30.691006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:30.691077  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:30.732146  142411 cri.go:89] found id: ""
	I0420 01:28:30.732186  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.732197  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:30.732209  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:30.732228  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:30.786890  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:30.786928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:30.802887  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:30.802920  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:30.884422  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:30.884447  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:30.884461  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:30.967504  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:30.967540  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:29.183645  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.683218  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.044031  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:35.540100  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.897363  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:34.392658  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.515720  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:33.531895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:33.531953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:33.574626  142411 cri.go:89] found id: ""
	I0420 01:28:33.574668  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.574682  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:33.574690  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:33.574757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:33.620527  142411 cri.go:89] found id: ""
	I0420 01:28:33.620553  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.620562  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:33.620568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:33.620630  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:33.659685  142411 cri.go:89] found id: ""
	I0420 01:28:33.659711  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.659719  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:33.659724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:33.659773  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:33.699390  142411 cri.go:89] found id: ""
	I0420 01:28:33.699414  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.699422  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:33.699427  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:33.699485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:33.743819  142411 cri.go:89] found id: ""
	I0420 01:28:33.743844  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.743852  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:33.743858  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:33.743907  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:33.788416  142411 cri.go:89] found id: ""
	I0420 01:28:33.788442  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.788450  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:33.788456  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:33.788514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:33.834105  142411 cri.go:89] found id: ""
	I0420 01:28:33.834129  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.834138  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:33.834144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:33.834206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:33.884118  142411 cri.go:89] found id: ""
	I0420 01:28:33.884152  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.884164  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:33.884176  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:33.884193  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:33.940493  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:33.940525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:33.954800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:33.954829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:34.030788  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:34.030812  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:34.030829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:34.119533  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:34.119574  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.667132  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:36.684253  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:36.684334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:36.723598  142411 cri.go:89] found id: ""
	I0420 01:28:36.723629  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.723641  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:36.723649  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:36.723718  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:36.761563  142411 cri.go:89] found id: ""
	I0420 01:28:36.761594  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.761606  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:36.761614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:36.761679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:36.803553  142411 cri.go:89] found id: ""
	I0420 01:28:36.803590  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.803603  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:36.803611  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:36.803674  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:36.840368  142411 cri.go:89] found id: ""
	I0420 01:28:36.840407  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.840421  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:36.840430  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:36.840497  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:36.879689  142411 cri.go:89] found id: ""
	I0420 01:28:36.879724  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.879735  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:36.879743  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:36.879807  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:36.920757  142411 cri.go:89] found id: ""
	I0420 01:28:36.920785  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.920796  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:36.920809  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:36.920871  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:36.957522  142411 cri.go:89] found id: ""
	I0420 01:28:36.957548  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.957556  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:36.957562  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:36.957624  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:36.997358  142411 cri.go:89] found id: ""
	I0420 01:28:36.997390  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.997400  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:36.997409  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:36.997422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:37.055063  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:37.055105  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:37.070691  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:37.070720  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:37.150114  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:37.150140  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:37.150152  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:37.228676  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:37.228711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.182514  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.183398  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.040622  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.539486  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:36.395217  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.893457  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.894381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:39.776620  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:39.792201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:39.792268  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:39.831544  142411 cri.go:89] found id: ""
	I0420 01:28:39.831568  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.831576  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:39.831588  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:39.831652  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:39.869458  142411 cri.go:89] found id: ""
	I0420 01:28:39.869488  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.869496  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:39.869503  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:39.869564  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:39.911588  142411 cri.go:89] found id: ""
	I0420 01:28:39.911615  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.911626  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:39.911633  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:39.911703  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:39.952458  142411 cri.go:89] found id: ""
	I0420 01:28:39.952489  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.952505  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:39.952513  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:39.952580  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:39.992988  142411 cri.go:89] found id: ""
	I0420 01:28:39.993016  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.993023  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:39.993029  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:39.993117  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:40.038306  142411 cri.go:89] found id: ""
	I0420 01:28:40.038348  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.038359  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:40.038367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:40.038432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:40.082185  142411 cri.go:89] found id: ""
	I0420 01:28:40.082219  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.082230  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:40.082238  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:40.082332  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:40.120346  142411 cri.go:89] found id: ""
	I0420 01:28:40.120373  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.120382  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:40.120391  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:40.120405  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:40.173735  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:40.173769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:40.191808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:40.191844  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:40.271429  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:40.271456  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:40.271473  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:40.361519  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:40.361558  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:42.938354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:42.953088  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:42.953167  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:42.992539  142411 cri.go:89] found id: ""
	I0420 01:28:42.992564  142411 logs.go:276] 0 containers: []
	W0420 01:28:42.992571  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:42.992577  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:42.992637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:43.032017  142411 cri.go:89] found id: ""
	I0420 01:28:43.032059  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.032074  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:43.032082  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:43.032142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:43.077229  142411 cri.go:89] found id: ""
	I0420 01:28:43.077258  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.077266  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:43.077272  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:43.077342  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:43.117107  142411 cri.go:89] found id: ""
	I0420 01:28:43.117128  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.117139  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:43.117145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:43.117206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:43.156262  142411 cri.go:89] found id: ""
	I0420 01:28:43.156297  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.156310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:43.156317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:43.156384  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:43.195897  142411 cri.go:89] found id: ""
	I0420 01:28:43.195927  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.195935  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:43.195942  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:43.195990  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:43.230468  142411 cri.go:89] found id: ""
	I0420 01:28:43.230498  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.230513  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:43.230522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:43.230586  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:43.271980  142411 cri.go:89] found id: ""
	I0420 01:28:43.272009  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.272023  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:43.272035  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:43.272050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:43.331606  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:43.331641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:43.348411  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:43.348437  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:28:40.682973  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.182655  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:42.540341  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.039729  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.393377  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.893276  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:28:43.428628  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:43.428654  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:43.428675  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:43.511471  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:43.511506  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.056166  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:46.071677  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:46.071744  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:46.110710  142411 cri.go:89] found id: ""
	I0420 01:28:46.110740  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.110753  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:46.110761  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:46.110825  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:46.170680  142411 cri.go:89] found id: ""
	I0420 01:28:46.170712  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.170724  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:46.170731  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:46.170794  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:46.216387  142411 cri.go:89] found id: ""
	I0420 01:28:46.216413  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.216421  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:46.216429  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:46.216485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:46.258641  142411 cri.go:89] found id: ""
	I0420 01:28:46.258674  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.258685  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:46.258694  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:46.258755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:46.296359  142411 cri.go:89] found id: ""
	I0420 01:28:46.296395  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.296407  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:46.296416  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:46.296480  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:46.335194  142411 cri.go:89] found id: ""
	I0420 01:28:46.335223  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.335238  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:46.335247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:46.335300  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:46.373748  142411 cri.go:89] found id: ""
	I0420 01:28:46.373777  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.373789  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:46.373796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:46.373860  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:46.416960  142411 cri.go:89] found id: ""
	I0420 01:28:46.416987  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.416995  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:46.417005  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:46.417017  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:46.497542  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:46.497582  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.548086  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:46.548136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:46.607354  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:46.607390  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:46.624379  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:46.624415  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:46.707425  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:45.682511  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.682752  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.046102  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.540014  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.895805  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:50.393001  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.208459  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:49.223081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:49.223146  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:49.258688  142411 cri.go:89] found id: ""
	I0420 01:28:49.258718  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.258728  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:49.258734  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:49.258791  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:49.296817  142411 cri.go:89] found id: ""
	I0420 01:28:49.296859  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.296870  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:49.296878  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:49.296941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:49.337821  142411 cri.go:89] found id: ""
	I0420 01:28:49.337853  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.337863  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:49.337870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:49.337940  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:49.381360  142411 cri.go:89] found id: ""
	I0420 01:28:49.381384  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.381392  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:49.381397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:49.381463  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:49.420099  142411 cri.go:89] found id: ""
	I0420 01:28:49.420143  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.420154  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:49.420162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:49.420223  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:49.459810  142411 cri.go:89] found id: ""
	I0420 01:28:49.459843  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.459850  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:49.459859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:49.459911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:49.499776  142411 cri.go:89] found id: ""
	I0420 01:28:49.499808  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.499820  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:49.499828  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:49.499894  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:49.536115  142411 cri.go:89] found id: ""
	I0420 01:28:49.536147  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.536158  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:49.536169  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:49.536190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:49.594665  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:49.594701  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:49.611896  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:49.611929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:49.689667  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:49.689685  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:49.689697  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.769061  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:49.769106  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.319299  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:52.336861  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:52.336934  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:52.380690  142411 cri.go:89] found id: ""
	I0420 01:28:52.380717  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.380725  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:52.380731  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:52.380781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:52.429798  142411 cri.go:89] found id: ""
	I0420 01:28:52.429831  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.429843  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:52.429851  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:52.429915  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:52.474087  142411 cri.go:89] found id: ""
	I0420 01:28:52.474120  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.474130  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:52.474139  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:52.474204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:52.514739  142411 cri.go:89] found id: ""
	I0420 01:28:52.514776  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.514789  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:52.514796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:52.514852  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:52.562100  142411 cri.go:89] found id: ""
	I0420 01:28:52.562195  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.562228  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:52.562236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:52.562324  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:52.623266  142411 cri.go:89] found id: ""
	I0420 01:28:52.623301  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.623313  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:52.623321  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:52.623386  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:52.667788  142411 cri.go:89] found id: ""
	I0420 01:28:52.667818  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.667828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:52.667838  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:52.667902  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:52.724607  142411 cri.go:89] found id: ""
	I0420 01:28:52.724636  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.724645  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:52.724654  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:52.724666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.774798  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:52.774836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:52.833949  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:52.833989  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:52.851757  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:52.851787  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:52.939092  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:52.939119  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:52.939136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.684112  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.182596  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:51.540918  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.039528  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.393913  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.892043  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:55.525807  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:55.540481  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:55.540557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:55.584415  142411 cri.go:89] found id: ""
	I0420 01:28:55.584447  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.584458  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:55.584466  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:55.584538  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:55.623920  142411 cri.go:89] found id: ""
	I0420 01:28:55.623955  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.623965  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:55.623973  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:55.624037  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:55.667768  142411 cri.go:89] found id: ""
	I0420 01:28:55.667802  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.667810  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:55.667816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:55.667889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:55.708466  142411 cri.go:89] found id: ""
	I0420 01:28:55.708502  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.708513  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:55.708520  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:55.708600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:55.748797  142411 cri.go:89] found id: ""
	I0420 01:28:55.748838  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.748849  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:55.748857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:55.748919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:55.791714  142411 cri.go:89] found id: ""
	I0420 01:28:55.791743  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.791752  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:55.791761  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:55.791832  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:55.833836  142411 cri.go:89] found id: ""
	I0420 01:28:55.833862  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.833872  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:55.833879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:55.833942  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:55.877425  142411 cri.go:89] found id: ""
	I0420 01:28:55.877462  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.877472  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:55.877484  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:55.877501  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:55.933237  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:55.933280  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:55.949507  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:55.949534  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:56.025596  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:56.025624  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:56.025641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:56.105403  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:56.105439  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:54.683664  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.684401  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.040380  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.040834  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:00.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.893067  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.894882  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.653368  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:58.669367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:58.669429  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:58.712457  142411 cri.go:89] found id: ""
	I0420 01:28:58.712490  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.712501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:58.712508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:58.712574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:58.750246  142411 cri.go:89] found id: ""
	I0420 01:28:58.750273  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.750281  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:58.750287  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:58.750351  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:58.793486  142411 cri.go:89] found id: ""
	I0420 01:28:58.793514  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.793522  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:58.793529  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:58.793595  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:58.839413  142411 cri.go:89] found id: ""
	I0420 01:28:58.839448  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.839461  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:58.839469  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:58.839537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:58.881385  142411 cri.go:89] found id: ""
	I0420 01:28:58.881418  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.881430  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:58.881438  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:58.881509  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:58.923900  142411 cri.go:89] found id: ""
	I0420 01:28:58.923945  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.923965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:58.923975  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:58.924038  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:58.962795  142411 cri.go:89] found id: ""
	I0420 01:28:58.962836  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.962848  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:58.962856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:58.962919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:59.006309  142411 cri.go:89] found id: ""
	I0420 01:28:59.006341  142411 logs.go:276] 0 containers: []
	W0420 01:28:59.006350  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:59.006360  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:59.006372  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:59.062778  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:59.062819  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.078600  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:59.078630  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:59.159340  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:59.159361  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:59.159376  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:59.247257  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:59.247307  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:01.792687  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:01.808507  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:01.808588  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:01.851642  142411 cri.go:89] found id: ""
	I0420 01:29:01.851680  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.851691  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:01.851699  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:01.851765  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:01.891516  142411 cri.go:89] found id: ""
	I0420 01:29:01.891549  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.891560  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:01.891568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:01.891640  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:01.934353  142411 cri.go:89] found id: ""
	I0420 01:29:01.934390  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.934402  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:01.934410  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:01.934479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:01.972552  142411 cri.go:89] found id: ""
	I0420 01:29:01.972587  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.972599  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:01.972607  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:01.972711  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:02.012316  142411 cri.go:89] found id: ""
	I0420 01:29:02.012348  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.012360  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:02.012368  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:02.012423  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:02.056951  142411 cri.go:89] found id: ""
	I0420 01:29:02.056984  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.056994  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:02.057001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:02.057164  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:02.104061  142411 cri.go:89] found id: ""
	I0420 01:29:02.104091  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.104102  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:02.104110  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:02.104163  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:02.144085  142411 cri.go:89] found id: ""
	I0420 01:29:02.144114  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.144125  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:02.144137  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:02.144160  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:02.216560  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:02.216585  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:02.216598  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:02.307178  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:02.307222  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:02.349769  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:02.349798  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:02.401141  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:02.401176  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.185384  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.684462  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.685188  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:02.041060  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.540616  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.393943  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.894095  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.917513  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:04.934187  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:04.934266  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:04.970258  142411 cri.go:89] found id: ""
	I0420 01:29:04.970289  142411 logs.go:276] 0 containers: []
	W0420 01:29:04.970298  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:04.970304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:04.970359  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:05.012853  142411 cri.go:89] found id: ""
	I0420 01:29:05.012883  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.012893  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:05.012899  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:05.012960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:05.054793  142411 cri.go:89] found id: ""
	I0420 01:29:05.054822  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.054833  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:05.054842  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:05.054910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:05.094637  142411 cri.go:89] found id: ""
	I0420 01:29:05.094674  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.094684  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:05.094701  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:05.094770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:05.134874  142411 cri.go:89] found id: ""
	I0420 01:29:05.134903  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.134912  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:05.134918  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:05.134973  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:05.175637  142411 cri.go:89] found id: ""
	I0420 01:29:05.175668  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.175679  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:05.175687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:05.175752  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:05.217809  142411 cri.go:89] found id: ""
	I0420 01:29:05.217847  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.217860  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:05.217867  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:05.217933  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:05.266884  142411 cri.go:89] found id: ""
	I0420 01:29:05.266917  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.266930  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:05.266941  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:05.266958  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:05.323765  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:05.323818  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:05.338524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:05.338553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:05.419860  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:05.419889  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:05.419906  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:05.506268  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:05.506311  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.055690  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:08.072692  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:08.072758  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:08.116247  142411 cri.go:89] found id: ""
	I0420 01:29:08.116287  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.116296  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:08.116304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:08.116369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:08.163152  142411 cri.go:89] found id: ""
	I0420 01:29:08.163177  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.163185  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:08.163190  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:08.163246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:08.207330  142411 cri.go:89] found id: ""
	I0420 01:29:08.207357  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.207365  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:08.207371  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:08.207422  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:08.249833  142411 cri.go:89] found id: ""
	I0420 01:29:08.249864  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.249873  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:08.249879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:08.249941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:08.290834  142411 cri.go:89] found id: ""
	I0420 01:29:08.290867  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.290876  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:08.290883  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:08.290957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:08.333767  142411 cri.go:89] found id: ""
	I0420 01:29:08.333799  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.333809  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:08.333816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:08.333888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:08.381431  142411 cri.go:89] found id: ""
	I0420 01:29:08.381459  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.381468  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:08.381474  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:08.381532  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:06.183719  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.184829  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.544179  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:09.039956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.394434  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.893184  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:10.897462  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.423702  142411 cri.go:89] found id: ""
	I0420 01:29:08.423727  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.423739  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:08.423751  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:08.423767  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.468422  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:08.468460  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:08.524091  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:08.524125  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:08.540294  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:08.540323  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:08.622439  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:08.622472  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:08.622488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.208472  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:11.225412  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:11.225479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:11.273723  142411 cri.go:89] found id: ""
	I0420 01:29:11.273755  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.273767  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:11.273775  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:11.273840  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:11.316083  142411 cri.go:89] found id: ""
	I0420 01:29:11.316118  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.316130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:11.316137  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:11.316203  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:11.355632  142411 cri.go:89] found id: ""
	I0420 01:29:11.355659  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.355668  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:11.355674  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:11.355734  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:11.397277  142411 cri.go:89] found id: ""
	I0420 01:29:11.397305  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.397327  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:11.397335  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:11.397399  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:11.439333  142411 cri.go:89] found id: ""
	I0420 01:29:11.439357  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.439366  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:11.439372  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:11.439433  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:11.477044  142411 cri.go:89] found id: ""
	I0420 01:29:11.477072  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.477079  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:11.477086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:11.477142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:11.516150  142411 cri.go:89] found id: ""
	I0420 01:29:11.516184  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.516196  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:11.516204  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:11.516274  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:11.557272  142411 cri.go:89] found id: ""
	I0420 01:29:11.557303  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.557331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:11.557344  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:11.557366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.652272  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:11.652319  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:11.700469  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:11.700504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:11.756674  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:11.756711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:11.772377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:11.772407  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:11.851387  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:10.682669  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:12.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:11.041282  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.541986  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.393346  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:15.394909  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:14.352257  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:14.367635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:14.367714  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:14.408757  142411 cri.go:89] found id: ""
	I0420 01:29:14.408779  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.408788  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:14.408794  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:14.408843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:14.455123  142411 cri.go:89] found id: ""
	I0420 01:29:14.455150  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.455159  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:14.455165  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:14.455239  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:14.499546  142411 cri.go:89] found id: ""
	I0420 01:29:14.499573  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.499581  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:14.499587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:14.499635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:14.541811  142411 cri.go:89] found id: ""
	I0420 01:29:14.541841  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.541851  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:14.541859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:14.541923  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:14.586965  142411 cri.go:89] found id: ""
	I0420 01:29:14.586990  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.587001  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:14.587008  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:14.587071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:14.625251  142411 cri.go:89] found id: ""
	I0420 01:29:14.625279  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.625288  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:14.625294  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:14.625377  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:14.665038  142411 cri.go:89] found id: ""
	I0420 01:29:14.665067  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.665079  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:14.665086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:14.665157  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:14.706931  142411 cri.go:89] found id: ""
	I0420 01:29:14.706964  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.706978  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:14.706992  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:14.707044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:14.761681  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:14.761717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:14.776324  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:14.776350  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:14.856707  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:14.856727  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:14.856738  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:14.944019  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:14.944064  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:17.489112  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:17.507594  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:17.507660  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:17.556091  142411 cri.go:89] found id: ""
	I0420 01:29:17.556122  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.556132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:17.556140  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:17.556205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:17.600016  142411 cri.go:89] found id: ""
	I0420 01:29:17.600072  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.600086  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:17.600107  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:17.600171  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:17.643074  142411 cri.go:89] found id: ""
	I0420 01:29:17.643106  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.643118  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:17.643125  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:17.643190  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:17.684798  142411 cri.go:89] found id: ""
	I0420 01:29:17.684827  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.684838  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:17.684845  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:17.684910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:17.725451  142411 cri.go:89] found id: ""
	I0420 01:29:17.725481  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.725494  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:17.725503  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:17.725575  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:17.765918  142411 cri.go:89] found id: ""
	I0420 01:29:17.765944  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.765952  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:17.765959  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:17.766023  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:17.806011  142411 cri.go:89] found id: ""
	I0420 01:29:17.806038  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.806049  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:17.806056  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:17.806122  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:17.848409  142411 cri.go:89] found id: ""
	I0420 01:29:17.848441  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.848453  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:17.848465  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:17.848488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:17.903854  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:17.903900  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:17.919156  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:17.919191  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:18.008073  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:18.008115  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:18.008133  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:18.095887  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:18.095929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:14.687917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.182326  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:16.039159  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:18.040487  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.540830  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.893270  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.392563  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.646919  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:20.664559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:20.664635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:20.714440  142411 cri.go:89] found id: ""
	I0420 01:29:20.714472  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.714481  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:20.714487  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:20.714543  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:20.755249  142411 cri.go:89] found id: ""
	I0420 01:29:20.755276  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.755287  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:20.755294  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:20.755355  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:20.795744  142411 cri.go:89] found id: ""
	I0420 01:29:20.795777  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.795786  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:20.795797  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:20.795864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:20.838083  142411 cri.go:89] found id: ""
	I0420 01:29:20.838111  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.838120  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:20.838128  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:20.838193  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:20.880198  142411 cri.go:89] found id: ""
	I0420 01:29:20.880227  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.880238  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:20.880245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:20.880312  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:20.920496  142411 cri.go:89] found id: ""
	I0420 01:29:20.920522  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.920530  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:20.920536  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:20.920618  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:20.960137  142411 cri.go:89] found id: ""
	I0420 01:29:20.960170  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.960180  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:20.960186  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:20.960251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:20.999583  142411 cri.go:89] found id: ""
	I0420 01:29:20.999624  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.999637  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:20.999649  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:20.999665  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:21.077439  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:21.077476  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:21.121104  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:21.121148  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:21.173871  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:21.173909  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:21.189767  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:21.189795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:21.264715  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:19.682554  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:21.682995  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.543452  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:25.040875  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.393626  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:24.894279  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:23.765605  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:23.782250  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:23.782334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:23.827248  142411 cri.go:89] found id: ""
	I0420 01:29:23.827277  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.827285  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:23.827291  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:23.827349  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:23.867610  142411 cri.go:89] found id: ""
	I0420 01:29:23.867636  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.867645  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:23.867651  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:23.867712  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:23.906244  142411 cri.go:89] found id: ""
	I0420 01:29:23.906271  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.906278  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:23.906283  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:23.906343  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:23.952256  142411 cri.go:89] found id: ""
	I0420 01:29:23.952288  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.952306  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:23.952314  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:23.952378  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:23.992843  142411 cri.go:89] found id: ""
	I0420 01:29:23.992879  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.992888  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:23.992896  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:23.992959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:24.036460  142411 cri.go:89] found id: ""
	I0420 01:29:24.036493  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.036504  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:24.036512  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:24.036582  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:24.075910  142411 cri.go:89] found id: ""
	I0420 01:29:24.075944  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.075955  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:24.075962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:24.076033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:24.122638  142411 cri.go:89] found id: ""
	I0420 01:29:24.122676  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.122688  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:24.122698  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:24.122717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:24.138022  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:24.138061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:24.220977  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:24.220998  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:24.221012  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.302928  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:24.302972  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:24.351237  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:24.351277  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:26.910354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:26.926815  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:26.926900  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:26.966123  142411 cri.go:89] found id: ""
	I0420 01:29:26.966155  142411 logs.go:276] 0 containers: []
	W0420 01:29:26.966165  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:26.966172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:26.966246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:27.011679  142411 cri.go:89] found id: ""
	I0420 01:29:27.011714  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.011727  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:27.011735  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:27.011806  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:27.052116  142411 cri.go:89] found id: ""
	I0420 01:29:27.052141  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.052148  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:27.052155  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:27.052202  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:27.090375  142411 cri.go:89] found id: ""
	I0420 01:29:27.090404  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.090413  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:27.090419  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:27.090476  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:27.131911  142411 cri.go:89] found id: ""
	I0420 01:29:27.131946  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.131957  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:27.131965  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:27.132033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:27.176663  142411 cri.go:89] found id: ""
	I0420 01:29:27.176696  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.176714  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:27.176723  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:27.176788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:27.217806  142411 cri.go:89] found id: ""
	I0420 01:29:27.217836  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.217846  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:27.217853  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:27.217917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:27.253956  142411 cri.go:89] found id: ""
	I0420 01:29:27.253981  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.253989  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:27.253998  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:27.254014  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:27.298225  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:27.298264  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:27.351213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:27.351259  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:27.366352  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:27.366388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:27.466716  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:27.466742  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:27.466770  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.184743  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:26.681862  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:28.683193  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.042377  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.539413  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.395660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.893947  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:30.050528  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:30.065697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:30.065769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:30.104643  142411 cri.go:89] found id: ""
	I0420 01:29:30.104675  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.104686  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:30.104694  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:30.104753  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:30.143864  142411 cri.go:89] found id: ""
	I0420 01:29:30.143892  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.143903  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:30.143910  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:30.143976  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:30.187925  142411 cri.go:89] found id: ""
	I0420 01:29:30.187954  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.187964  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:30.187972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:30.188035  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:30.227968  142411 cri.go:89] found id: ""
	I0420 01:29:30.227995  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.228003  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:30.228009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:30.228059  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.269550  142411 cri.go:89] found id: ""
	I0420 01:29:30.269584  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.269596  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:30.269604  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:30.269672  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:30.311777  142411 cri.go:89] found id: ""
	I0420 01:29:30.311810  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.311819  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:30.311827  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:30.311878  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:30.353569  142411 cri.go:89] found id: ""
	I0420 01:29:30.353601  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.353610  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:30.353617  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:30.353683  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:30.395003  142411 cri.go:89] found id: ""
	I0420 01:29:30.395032  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.395043  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:30.395054  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:30.395066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:30.455495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:30.455536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:30.473749  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:30.473778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:30.555370  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:30.555397  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:30.555417  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:30.637079  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:30.637124  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:33.188917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:33.203689  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:33.203757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:33.246796  142411 cri.go:89] found id: ""
	I0420 01:29:33.246828  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.246840  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:33.246848  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:33.246911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:33.284667  142411 cri.go:89] found id: ""
	I0420 01:29:33.284700  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.284712  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:33.284720  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:33.284782  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:33.328653  142411 cri.go:89] found id: ""
	I0420 01:29:33.328688  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.328701  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:33.328709  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:33.328777  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:33.369081  142411 cri.go:89] found id: ""
	I0420 01:29:33.369107  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.369121  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:33.369130  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:33.369180  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.684861  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:32.689885  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.547492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.040445  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.894902  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.392071  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:33.414282  142411 cri.go:89] found id: ""
	I0420 01:29:33.414313  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.414322  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:33.414327  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:33.414411  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:33.457086  142411 cri.go:89] found id: ""
	I0420 01:29:33.457112  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.457119  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:33.457126  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:33.457176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:33.498686  142411 cri.go:89] found id: ""
	I0420 01:29:33.498716  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.498729  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:33.498738  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:33.498808  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:33.538872  142411 cri.go:89] found id: ""
	I0420 01:29:33.538907  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.538920  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:33.538932  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:33.538959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:33.592586  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:33.592631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:33.609200  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:33.609226  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:33.690795  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:33.690820  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:33.690836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:33.776092  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:33.776131  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:36.331256  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:36.348813  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:36.348892  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:36.397503  142411 cri.go:89] found id: ""
	I0420 01:29:36.397527  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.397534  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:36.397540  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:36.397603  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:36.439638  142411 cri.go:89] found id: ""
	I0420 01:29:36.439667  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.439675  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:36.439685  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:36.439761  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:36.477155  142411 cri.go:89] found id: ""
	I0420 01:29:36.477182  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.477194  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:36.477201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:36.477259  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:36.533326  142411 cri.go:89] found id: ""
	I0420 01:29:36.533360  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.533373  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:36.533381  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:36.533446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:36.573056  142411 cri.go:89] found id: ""
	I0420 01:29:36.573093  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.573107  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:36.573114  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:36.573177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:36.611901  142411 cri.go:89] found id: ""
	I0420 01:29:36.611937  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.611949  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:36.611957  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:36.612017  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:36.656780  142411 cri.go:89] found id: ""
	I0420 01:29:36.656810  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.656823  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:36.656830  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:36.656899  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:36.699872  142411 cri.go:89] found id: ""
	I0420 01:29:36.699906  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.699916  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:36.699928  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:36.699943  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:36.758859  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:36.758895  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:36.775108  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:36.775145  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:36.858001  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:36.858027  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:36.858044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:36.936114  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:36.936154  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:35.182481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:37.182529  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.041125  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.043465  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.540023  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.395316  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.894062  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.894416  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:39.487167  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:39.502929  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:39.502995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:39.547338  142411 cri.go:89] found id: ""
	I0420 01:29:39.547363  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.547371  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:39.547377  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:39.547430  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:39.608684  142411 cri.go:89] found id: ""
	I0420 01:29:39.608714  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.608722  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:39.608728  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:39.608793  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:39.679248  142411 cri.go:89] found id: ""
	I0420 01:29:39.679281  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.679292  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:39.679300  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:39.679361  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:39.725226  142411 cri.go:89] found id: ""
	I0420 01:29:39.725257  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.725270  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:39.725278  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:39.725363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:39.767653  142411 cri.go:89] found id: ""
	I0420 01:29:39.767681  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.767690  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:39.767697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:39.767760  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:39.807848  142411 cri.go:89] found id: ""
	I0420 01:29:39.807885  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.807893  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:39.807900  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:39.807968  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:39.847171  142411 cri.go:89] found id: ""
	I0420 01:29:39.847201  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.847212  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:39.847219  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:39.847284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:39.884959  142411 cri.go:89] found id: ""
	I0420 01:29:39.884996  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.885007  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:39.885034  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:39.885050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:39.959245  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:39.959269  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:39.959286  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:40.041394  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:40.041436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:40.083125  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:40.083171  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:40.139902  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:40.139957  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:42.657038  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:42.673303  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:42.673407  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:42.717081  142411 cri.go:89] found id: ""
	I0420 01:29:42.717106  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.717114  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:42.717120  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:42.717170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:42.762322  142411 cri.go:89] found id: ""
	I0420 01:29:42.762357  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.762367  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:42.762375  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:42.762442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:42.805059  142411 cri.go:89] found id: ""
	I0420 01:29:42.805112  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.805122  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:42.805131  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:42.805201  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:42.848539  142411 cri.go:89] found id: ""
	I0420 01:29:42.848568  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.848580  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:42.848587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:42.848679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:42.887915  142411 cri.go:89] found id: ""
	I0420 01:29:42.887949  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.887960  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:42.887967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:42.888032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:42.938832  142411 cri.go:89] found id: ""
	I0420 01:29:42.938867  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.938878  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:42.938888  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:42.938957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:42.982376  142411 cri.go:89] found id: ""
	I0420 01:29:42.982402  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.982409  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:42.982415  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:42.982477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:43.023264  142411 cri.go:89] found id: ""
	I0420 01:29:43.023293  142411 logs.go:276] 0 containers: []
	W0420 01:29:43.023301  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:43.023313  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:43.023326  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:43.079673  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:43.079714  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:43.094753  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:43.094786  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:43.180113  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:43.180149  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:43.180177  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:43.259830  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:43.259872  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:39.182568  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:41.186805  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.683131  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:42.540687  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.039857  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.392948  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.394081  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.802515  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:45.816908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:45.816965  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:45.861091  142411 cri.go:89] found id: ""
	I0420 01:29:45.861123  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.861132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:45.861138  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:45.861224  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:45.901677  142411 cri.go:89] found id: ""
	I0420 01:29:45.901702  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.901710  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:45.901716  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:45.901767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:45.938301  142411 cri.go:89] found id: ""
	I0420 01:29:45.938325  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.938334  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:45.938339  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:45.938393  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:45.978432  142411 cri.go:89] found id: ""
	I0420 01:29:45.978460  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.978473  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:45.978479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:45.978537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:46.019410  142411 cri.go:89] found id: ""
	I0420 01:29:46.019446  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.019455  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:46.019461  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:46.019524  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:46.071002  142411 cri.go:89] found id: ""
	I0420 01:29:46.071032  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.071041  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:46.071052  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:46.071124  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:46.110362  142411 cri.go:89] found id: ""
	I0420 01:29:46.110391  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.110402  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:46.110409  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:46.110477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:46.152276  142411 cri.go:89] found id: ""
	I0420 01:29:46.152311  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.152322  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:46.152334  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:46.152351  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:46.205121  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:46.205159  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:46.221808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:46.221842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:46.300394  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:46.300418  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:46.300434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:46.391961  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:46.392002  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:45.684038  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.176081  141927 pod_ready.go:81] duration metric: took 4m0.00056563s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	E0420 01:29:48.176112  141927 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:29:48.176130  141927 pod_ready.go:38] duration metric: took 4m7.024291569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:29:48.176166  141927 kubeadm.go:591] duration metric: took 4m16.819079549s to restartPrimaryControlPlane
	W0420 01:29:48.176256  141927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:29:48.176291  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:29:47.040255  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.043956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:47.893875  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.894291  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.945086  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:48.961414  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:48.961491  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:49.010230  142411 cri.go:89] found id: ""
	I0420 01:29:49.010285  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.010299  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:49.010309  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:49.010385  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:49.054455  142411 cri.go:89] found id: ""
	I0420 01:29:49.054481  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.054491  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:49.054499  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:49.054566  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:49.094536  142411 cri.go:89] found id: ""
	I0420 01:29:49.094562  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.094572  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:49.094580  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:49.094740  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:49.134004  142411 cri.go:89] found id: ""
	I0420 01:29:49.134035  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.134046  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:49.134054  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:49.134118  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:49.173697  142411 cri.go:89] found id: ""
	I0420 01:29:49.173728  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.173741  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:49.173750  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:49.173817  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:49.215655  142411 cri.go:89] found id: ""
	I0420 01:29:49.215681  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.215689  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:49.215695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:49.215745  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:49.258282  142411 cri.go:89] found id: ""
	I0420 01:29:49.258312  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.258324  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:49.258332  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:49.258394  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:49.298565  142411 cri.go:89] found id: ""
	I0420 01:29:49.298597  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.298608  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:49.298620  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:49.298638  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:49.378833  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:49.378862  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:49.378880  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:49.467477  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:49.467517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:49.521747  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:49.521788  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:49.583386  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:49.583436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.102969  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:52.122971  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:52.123053  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:52.166166  142411 cri.go:89] found id: ""
	I0420 01:29:52.166199  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.166210  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:52.166219  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:52.166287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:52.206790  142411 cri.go:89] found id: ""
	I0420 01:29:52.206817  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.206824  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:52.206830  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:52.206889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:52.249879  142411 cri.go:89] found id: ""
	I0420 01:29:52.249911  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.249921  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:52.249931  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:52.249997  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:52.293953  142411 cri.go:89] found id: ""
	I0420 01:29:52.293997  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.294009  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:52.294018  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:52.294095  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:52.339447  142411 cri.go:89] found id: ""
	I0420 01:29:52.339478  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.339490  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:52.339497  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:52.339558  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:52.378383  142411 cri.go:89] found id: ""
	I0420 01:29:52.378416  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.378428  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:52.378435  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:52.378488  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:52.423079  142411 cri.go:89] found id: ""
	I0420 01:29:52.423121  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.423130  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:52.423137  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:52.423205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:52.459525  142411 cri.go:89] found id: ""
	I0420 01:29:52.459559  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.459572  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:52.459594  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:52.459610  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:52.567141  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:52.567186  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:52.618194  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:52.618235  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:52.681921  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:52.681959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.699065  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:52.699108  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:52.776829  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:51.540922  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.043224  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:52.397218  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.895147  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:55.277933  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:55.293380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:55.293455  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:55.337443  142411 cri.go:89] found id: ""
	I0420 01:29:55.337475  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.337483  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:55.337491  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:55.337557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:55.375911  142411 cri.go:89] found id: ""
	I0420 01:29:55.375942  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.375951  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:55.375957  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:55.376022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:55.418545  142411 cri.go:89] found id: ""
	I0420 01:29:55.418569  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.418577  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:55.418583  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:55.418635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:55.459343  142411 cri.go:89] found id: ""
	I0420 01:29:55.459378  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.459390  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:55.459397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:55.459452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:55.503851  142411 cri.go:89] found id: ""
	I0420 01:29:55.503878  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.503887  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:55.503895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:55.503959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:55.542533  142411 cri.go:89] found id: ""
	I0420 01:29:55.542556  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.542562  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:55.542568  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:55.542623  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:55.582205  142411 cri.go:89] found id: ""
	I0420 01:29:55.582236  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.582246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:55.582252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:55.582314  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:55.624727  142411 cri.go:89] found id: ""
	I0420 01:29:55.624757  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.624769  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:55.624781  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:55.624803  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:55.675403  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:55.675438  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:55.691492  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:55.691516  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:55.772283  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:55.772313  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:55.772330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:55.859440  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:55.859477  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:56.543221  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.041874  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:57.393723  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.894390  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:58.406009  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:58.422305  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:58.422382  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:58.468206  142411 cri.go:89] found id: ""
	I0420 01:29:58.468303  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.468321  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:58.468329  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:58.468402  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:58.513981  142411 cri.go:89] found id: ""
	I0420 01:29:58.514018  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.514027  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:58.514041  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:58.514105  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:58.559967  142411 cri.go:89] found id: ""
	I0420 01:29:58.560000  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.560011  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:58.560019  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:58.560084  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:58.600710  142411 cri.go:89] found id: ""
	I0420 01:29:58.600744  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.600763  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:58.600771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:58.600834  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:58.645995  142411 cri.go:89] found id: ""
	I0420 01:29:58.646022  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.646030  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:58.646036  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:58.646097  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:58.684930  142411 cri.go:89] found id: ""
	I0420 01:29:58.684957  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.684965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:58.684972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:58.685022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:58.727225  142411 cri.go:89] found id: ""
	I0420 01:29:58.727251  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.727259  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:58.727265  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:58.727319  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:58.765244  142411 cri.go:89] found id: ""
	I0420 01:29:58.765282  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.765293  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:58.765303  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:58.765330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:58.817791  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:58.817822  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:58.832882  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:58.832926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:58.919297  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:58.919325  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:58.919342  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:59.002590  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:59.002637  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.551854  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:01.568974  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:01.569054  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:01.609165  142411 cri.go:89] found id: ""
	I0420 01:30:01.609191  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.609200  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:01.609206  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:01.609272  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:01.653349  142411 cri.go:89] found id: ""
	I0420 01:30:01.653383  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.653396  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:01.653405  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:01.653482  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:01.698961  142411 cri.go:89] found id: ""
	I0420 01:30:01.698991  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.699002  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:01.699009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:01.699063  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:01.739230  142411 cri.go:89] found id: ""
	I0420 01:30:01.739271  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.739283  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:01.739292  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:01.739376  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:01.781839  142411 cri.go:89] found id: ""
	I0420 01:30:01.781873  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.781885  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:01.781893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:01.781960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:01.821212  142411 cri.go:89] found id: ""
	I0420 01:30:01.821241  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.821252  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:01.821259  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:01.821339  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:01.859959  142411 cri.go:89] found id: ""
	I0420 01:30:01.859984  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.859993  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:01.859999  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:01.860060  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:01.898832  142411 cri.go:89] found id: ""
	I0420 01:30:01.898858  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.898865  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:01.898875  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:01.898886  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.943065  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:01.943156  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:01.995618  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:01.995654  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:02.010489  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:02.010517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:02.090181  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:02.090222  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:02.090238  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:01.541135  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.041977  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:02.394456  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.894450  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.671376  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:04.687535  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:04.687629  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:04.728732  142411 cri.go:89] found id: ""
	I0420 01:30:04.728765  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.728778  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:04.728786  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:04.728854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:04.768537  142411 cri.go:89] found id: ""
	I0420 01:30:04.768583  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.768602  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:04.768610  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:04.768676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:04.811714  142411 cri.go:89] found id: ""
	I0420 01:30:04.811741  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.811750  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:04.811756  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:04.811816  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:04.852324  142411 cri.go:89] found id: ""
	I0420 01:30:04.852360  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.852371  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:04.852379  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:04.852452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:04.891657  142411 cri.go:89] found id: ""
	I0420 01:30:04.891688  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.891700  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:04.891708  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:04.891774  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:04.933192  142411 cri.go:89] found id: ""
	I0420 01:30:04.933222  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.933230  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:04.933236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:04.933291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:04.972796  142411 cri.go:89] found id: ""
	I0420 01:30:04.972819  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.972828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:04.972834  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:04.972888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:05.014782  142411 cri.go:89] found id: ""
	I0420 01:30:05.014821  142411 logs.go:276] 0 containers: []
	W0420 01:30:05.014833  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:05.014846  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:05.014862  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:05.067438  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:05.067470  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:05.121336  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:05.121371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:05.137495  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:05.137529  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:05.214132  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:05.214153  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:05.214170  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:07.796964  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:07.810856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:07.810917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:07.846993  142411 cri.go:89] found id: ""
	I0420 01:30:07.847024  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.847033  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:07.847040  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:07.847089  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:07.886422  142411 cri.go:89] found id: ""
	I0420 01:30:07.886452  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.886464  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:07.886474  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:07.886567  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:07.942200  142411 cri.go:89] found id: ""
	I0420 01:30:07.942230  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.942238  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:07.942245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:07.942296  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:07.980179  142411 cri.go:89] found id: ""
	I0420 01:30:07.980215  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.980226  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:07.980235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:07.980299  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:08.020097  142411 cri.go:89] found id: ""
	I0420 01:30:08.020130  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.020140  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:08.020145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:08.020215  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:08.063793  142411 cri.go:89] found id: ""
	I0420 01:30:08.063837  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.063848  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:08.063857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:08.063930  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:08.108674  142411 cri.go:89] found id: ""
	I0420 01:30:08.108705  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.108716  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:08.108724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:08.108798  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:08.147467  142411 cri.go:89] found id: ""
	I0420 01:30:08.147495  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.147503  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:08.147512  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:08.147525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:08.239416  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:08.239466  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:08.294639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:08.294669  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:08.349753  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:08.349795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:08.368971  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:08.369003  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:30:06.540958  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:08.541701  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:06.898857  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:09.397590  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:30:08.449996  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:10.950318  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:10.964969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:10.965032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:11.006321  142411 cri.go:89] found id: ""
	I0420 01:30:11.006354  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.006365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:11.006375  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:11.006437  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:11.047982  142411 cri.go:89] found id: ""
	I0420 01:30:11.048010  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.048019  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:11.048025  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:11.048073  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:11.089185  142411 cri.go:89] found id: ""
	I0420 01:30:11.089217  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.089226  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:11.089232  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:11.089287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:11.131293  142411 cri.go:89] found id: ""
	I0420 01:30:11.131322  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.131335  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:11.131344  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:11.131398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:11.170394  142411 cri.go:89] found id: ""
	I0420 01:30:11.170419  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.170427  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:11.170432  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:11.170485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:11.210580  142411 cri.go:89] found id: ""
	I0420 01:30:11.210619  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.210631  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:11.210640  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:11.210706  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:11.251938  142411 cri.go:89] found id: ""
	I0420 01:30:11.251977  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.251990  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:11.251998  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:11.252064  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:11.295999  142411 cri.go:89] found id: ""
	I0420 01:30:11.296033  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.296045  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:11.296057  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:11.296072  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:11.378564  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:11.378632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:11.422836  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:11.422868  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:11.475893  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:11.475928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:11.491524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:11.491555  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:11.569066  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:11.041078  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:13.540339  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:15.541762  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:11.893724  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.394206  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.886464  142057 pod_ready.go:81] duration metric: took 4m0.00077804s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
	E0420 01:30:14.886500  142057 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:30:14.886528  142057 pod_ready.go:38] duration metric: took 4m14.554070758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:14.886572  142057 kubeadm.go:591] duration metric: took 4m22.173690393s to restartPrimaryControlPlane
	W0420 01:30:14.886657  142057 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:14.886691  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:14.070158  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:14.086000  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:14.086067  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:14.128864  142411 cri.go:89] found id: ""
	I0420 01:30:14.128894  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.128906  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:14.128914  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:14.128986  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:14.169447  142411 cri.go:89] found id: ""
	I0420 01:30:14.169482  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.169497  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:14.169506  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:14.169583  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:14.210007  142411 cri.go:89] found id: ""
	I0420 01:30:14.210043  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.210054  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:14.210062  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:14.210119  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:14.247652  142411 cri.go:89] found id: ""
	I0420 01:30:14.247685  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.247695  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:14.247703  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:14.247764  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:14.290788  142411 cri.go:89] found id: ""
	I0420 01:30:14.290820  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.290830  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:14.290847  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:14.290908  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:14.351514  142411 cri.go:89] found id: ""
	I0420 01:30:14.351548  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.351570  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:14.351581  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:14.351637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:14.423481  142411 cri.go:89] found id: ""
	I0420 01:30:14.423520  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.423534  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:14.423543  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:14.423615  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:14.465597  142411 cri.go:89] found id: ""
	I0420 01:30:14.465622  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.465630  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:14.465639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:14.465655  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:14.522669  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:14.522705  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:14.541258  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:14.541293  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:14.618657  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:14.618678  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:14.618691  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:14.702616  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:14.702658  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:17.256212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:17.277171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:17.277250  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:17.321548  142411 cri.go:89] found id: ""
	I0420 01:30:17.321582  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.321600  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:17.321607  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:17.321676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:17.362856  142411 cri.go:89] found id: ""
	I0420 01:30:17.362883  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.362890  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:17.362896  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:17.362966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:17.409494  142411 cri.go:89] found id: ""
	I0420 01:30:17.409525  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.409539  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:17.409548  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:17.409631  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:17.447759  142411 cri.go:89] found id: ""
	I0420 01:30:17.447801  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.447812  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:17.447819  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:17.447885  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:17.498416  142411 cri.go:89] found id: ""
	I0420 01:30:17.498444  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.498454  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:17.498460  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:17.498528  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:17.546025  142411 cri.go:89] found id: ""
	I0420 01:30:17.546055  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.546064  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:17.546072  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:17.546138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:17.585797  142411 cri.go:89] found id: ""
	I0420 01:30:17.585829  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.585840  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:17.585848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:17.585919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:17.630850  142411 cri.go:89] found id: ""
	I0420 01:30:17.630886  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.630899  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:17.630911  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:17.630926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:17.689472  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:17.689510  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:17.705603  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:17.705642  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:17.794094  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:17.794137  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:17.794155  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:17.879397  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:17.879435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:18.041437  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.044174  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.428142  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:20.444936  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:20.445018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:20.487317  142411 cri.go:89] found id: ""
	I0420 01:30:20.487354  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.487365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:20.487373  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:20.487443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:20.537209  142411 cri.go:89] found id: ""
	I0420 01:30:20.537241  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.537254  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:20.537262  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:20.537348  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:20.584311  142411 cri.go:89] found id: ""
	I0420 01:30:20.584343  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.584352  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:20.584357  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:20.584413  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:20.631915  142411 cri.go:89] found id: ""
	I0420 01:30:20.631948  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.631959  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:20.631969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:20.632040  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:20.679680  142411 cri.go:89] found id: ""
	I0420 01:30:20.679707  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.679716  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:20.679721  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:20.679770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:20.724967  142411 cri.go:89] found id: ""
	I0420 01:30:20.725002  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.725013  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:20.725027  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:20.725091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:20.772717  142411 cri.go:89] found id: ""
	I0420 01:30:20.772751  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.772762  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:20.772771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:20.772837  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:20.812421  142411 cri.go:89] found id: ""
	I0420 01:30:20.812449  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.812460  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:20.812471  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:20.812485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:20.870522  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:20.870554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:20.886764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:20.886793  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:20.963941  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:20.963964  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:20.963979  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:21.045738  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:21.045778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:20.850989  141927 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.674674204s)
	I0420 01:30:20.851082  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:20.868537  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:20.880284  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:20.891650  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:20.891672  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:20.891726  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:30:20.902443  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:20.902509  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:20.913476  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:30:20.923762  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:20.923836  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:20.934281  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.944194  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:20.944254  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.955506  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:30:20.968039  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:20.968107  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:20.978918  141927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:21.214688  141927 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:22.539778  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:24.543547  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:23.600037  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:23.616539  142411 kubeadm.go:591] duration metric: took 4m4.142686832s to restartPrimaryControlPlane
	W0420 01:30:23.616641  142411 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:23.616676  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:25.481285  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.864573977s)
	I0420 01:30:25.481385  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:25.500950  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:25.518624  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:25.532506  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:25.532531  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:25.532584  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:25.546634  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:25.546708  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:25.561379  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:25.575506  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:25.575627  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:25.590615  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.604855  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:25.604923  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.619717  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:25.634525  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:25.634607  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:25.649408  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:25.735636  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:30:25.735697  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:25.913199  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:25.913347  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:25.913483  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:26.120240  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:26.122066  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:26.122169  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:26.122279  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:26.122395  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:26.122499  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:26.122623  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:26.122715  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:26.122806  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:26.122898  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:26.122999  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:26.123113  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:26.123173  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:26.123244  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:26.243908  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:26.354349  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:26.605778  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:26.833914  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:26.855348  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:26.857029  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:26.857250  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:27.010707  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:27.012314  142411 out.go:204]   - Booting up control plane ...
	I0420 01:30:27.012456  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:27.036284  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:27.049123  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:27.050561  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:27.053222  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:30:30.213456  141927 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:30.213557  141927 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:30.213687  141927 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:30.213826  141927 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:30.213915  141927 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:30.213978  141927 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:30.215501  141927 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:30.215594  141927 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:30.215667  141927 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:30.215802  141927 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:30.215886  141927 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:30.215960  141927 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:30.216018  141927 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:30.216097  141927 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:30.216156  141927 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:30.216258  141927 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:30.216350  141927 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:30.216385  141927 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:30.216447  141927 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:30.216517  141927 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:30.216589  141927 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:30.216653  141927 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:30.216743  141927 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:30.216832  141927 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:30.216933  141927 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:30.217019  141927 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:30.218228  141927 out.go:204]   - Booting up control plane ...
	I0420 01:30:30.218341  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:30.218446  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:30.218516  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:30.218615  141927 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:30.218703  141927 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:30.218753  141927 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:30.218904  141927 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:30.218975  141927 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:30.219027  141927 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001925972s
	I0420 01:30:30.219128  141927 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:30.219216  141927 kubeadm.go:309] [api-check] The API server is healthy after 5.502367015s
	I0420 01:30:30.219336  141927 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:30.219504  141927 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:30.219576  141927 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:30.219816  141927 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-907988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:30.219880  141927 kubeadm.go:309] [bootstrap-token] Using token: ozlrl4.y5r3psi4bnl35gso
	I0420 01:30:30.221283  141927 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:30.221416  141927 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:30.221533  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:30.221728  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:30.221968  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:30.222146  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:30.222255  141927 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:30.222385  141927 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:30.222455  141927 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:30.222524  141927 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:30.222534  141927 kubeadm.go:309] 
	I0420 01:30:30.222614  141927 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:30.222628  141927 kubeadm.go:309] 
	I0420 01:30:30.222692  141927 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:30.222699  141927 kubeadm.go:309] 
	I0420 01:30:30.222723  141927 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:30.222772  141927 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:30.222815  141927 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:30.222821  141927 kubeadm.go:309] 
	I0420 01:30:30.222878  141927 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:30.222885  141927 kubeadm.go:309] 
	I0420 01:30:30.222923  141927 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:30.222929  141927 kubeadm.go:309] 
	I0420 01:30:30.222994  141927 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:30.223100  141927 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:30.223171  141927 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:30.223189  141927 kubeadm.go:309] 
	I0420 01:30:30.223281  141927 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:30.223346  141927 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:30.223354  141927 kubeadm.go:309] 
	I0420 01:30:30.223423  141927 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223527  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:30.223552  141927 kubeadm.go:309] 	--control-plane 
	I0420 01:30:30.223559  141927 kubeadm.go:309] 
	I0420 01:30:30.223627  141927 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:30.223635  141927 kubeadm.go:309] 
	I0420 01:30:30.223704  141927 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223811  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:30.223826  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:30:30.223833  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:30.225184  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:27.041383  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:29.540967  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:30.226237  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:30.241388  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
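For reference, the 1-k8s.conflist copied above is a bridge CNI configuration. The Go sketch below assembles a minimal conflist of the same general shape for illustration only; the bridge name, subnet, and plugin list are assumptions, not the exact 496-byte file minikube writes.

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// Minimal bridge CNI conflist of the same general shape as minikube's
	// /etc/cni/net.d/1-k8s.conflist; all values are illustrative assumptions.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// Written to the working directory here; minikube places the real file under /etc/cni/net.d/.
	if err := os.WriteFile("1-k8s.conflist", data, 0o644); err != nil {
		log.Fatal(err)
	}
}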
	I0420 01:30:30.274356  141927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:30.274469  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:30.274503  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-907988 minikube.k8s.io/updated_at=2024_04_20T01_30_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=default-k8s-diff-port-907988 minikube.k8s.io/primary=true
	I0420 01:30:30.319402  141927 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:30.505362  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.006101  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.505679  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.005947  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.505747  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.005919  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.505449  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:34.006029  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.040710  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.541175  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.505846  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.006187  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.505618  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.005994  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.506217  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.006428  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.506359  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.006018  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.505454  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:39.006426  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.041157  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.542266  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.506227  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.005941  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.506123  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.006198  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.506244  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.006045  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.505458  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.006082  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.122481  141927 kubeadm.go:1107] duration metric: took 12.84807935s to wait for elevateKubeSystemPrivileges
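The repeated "kubectl get sa default" runs above are minikube polling until the "default" service account exists before elevating kube-system privileges. A rough sketch of the same poll loop in Go, shelling out to kubectl the way the log does (paths, cadence, and timeout are illustrative assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries "kubectl get sa default" until it
// succeeds or the timeout elapses, mirroring the repeated calls in the log.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account found
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("result:", err)
}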
	W0420 01:30:43.122525  141927 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:30:43.122535  141927 kubeadm.go:393] duration metric: took 5m11.83456536s to StartCluster
	I0420 01:30:43.122559  141927 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.122689  141927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:30:43.124746  141927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.125059  141927 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:30:43.126572  141927 out.go:177] * Verifying Kubernetes components...
	I0420 01:30:43.125129  141927 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:30:43.125301  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:30:43.128187  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:30:43.128231  141927 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128240  141927 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128277  141927 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-907988"
	I0420 01:30:43.128278  141927 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-907988"
	W0420 01:30:43.128288  141927 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:30:43.128302  141927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-907988"
	I0420 01:30:43.128352  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.128769  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128795  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128840  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128800  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128306  141927 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.128994  141927 addons.go:243] addon metrics-server should already be in state true
	I0420 01:30:43.129026  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.129378  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.129401  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.148251  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0420 01:30:43.148272  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0420 01:30:43.148503  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0420 01:30:43.148959  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.148985  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149060  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149605  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149626  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149683  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149688  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149698  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149706  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.150105  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150108  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150106  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150358  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.150703  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150733  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.150760  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150798  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.154242  141927 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.154266  141927 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:30:43.154300  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.154673  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.154715  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.167283  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0420 01:30:43.167925  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.168475  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.168496  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.168868  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.169094  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.171067  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0420 01:30:43.171384  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.173102  141927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:30:43.171760  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.172823  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0420 01:30:43.174639  141927 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.174661  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:30:43.174681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.174859  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.175307  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175331  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175460  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175476  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175799  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.175992  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.176361  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.176376  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.176686  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.178744  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.178848  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.180048  141927 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:30:43.179462  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.181257  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:30:43.181275  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:30:43.181289  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.181296  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.179641  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.182168  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.182437  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.182627  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.184562  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.184958  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.184985  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.185241  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.185430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.185621  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.185771  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.195778  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0420 01:30:43.196419  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.196979  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.197002  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.197763  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.198072  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.200177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.200480  141927 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.200497  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:30:43.200516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.204078  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.204128  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204154  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.204178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204275  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.204456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.204582  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.375731  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:30:43.424911  141927 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436729  141927 node_ready.go:49] node "default-k8s-diff-port-907988" has status "Ready":"True"
	I0420 01:30:43.436750  141927 node_ready.go:38] duration metric: took 11.810027ms for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436759  141927 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:43.445452  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:43.497224  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.526236  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.527573  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:30:43.527597  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:30:43.591844  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:30:43.591872  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:30:43.655692  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:43.655721  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:30:43.824523  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:44.808651  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.311370016s)
	I0420 01:30:44.808721  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808724  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.282444767s)
	I0420 01:30:44.808735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.808767  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808783  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809052  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809066  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809074  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809081  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809144  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809162  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809170  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809626  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809635  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809647  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809655  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:44.833935  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.833963  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.834326  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.834348  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316084  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.491512905s)
	I0420 01:30:45.316157  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316514  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316539  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316593  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316610  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316910  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316989  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.317007  141927 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-907988"
	I0420 01:30:45.316906  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:45.319289  141927 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:30:42.040865  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:44.042663  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.320468  141927 addons.go:505] duration metric: took 2.195343987s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
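With storage-provisioner, default-storageclass, and metrics-server applied, one hedged way to check the metrics-server addon by hand, outside the test harness, is to query the aggregated metrics API once its pod leaves Pending; the sketch below is illustrative and not something minikube itself runs.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The v1beta1.metrics.k8s.io APIService must report Available before
	// "kubectl top" returns data; the metrics-server pod above is still Pending.
	for _, args := range [][]string{
		{"get", "apiservice", "v1beta1.metrics.k8s.io"},
		{"top", "nodes"},
	} {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("kubectl %v (err=%v):\n%s", args, err, out)
	}
}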
	I0420 01:30:45.453717  141927 pod_ready.go:102] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.952010  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.952032  141927 pod_ready.go:81] duration metric: took 2.506556645s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.952040  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957512  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.957533  141927 pod_ready.go:81] duration metric: took 5.486362ms for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957541  141927 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962790  141927 pod_ready.go:92] pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.962810  141927 pod_ready.go:81] duration metric: took 5.261485ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962821  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968720  141927 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.968743  141927 pod_ready.go:81] duration metric: took 5.914425ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968754  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976930  141927 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.976946  141927 pod_ready.go:81] duration metric: took 8.183898ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976954  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350179  141927 pod_ready.go:92] pod "kube-proxy-jt8wr" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.350203  141927 pod_ready.go:81] duration metric: took 373.241134ms for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350212  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749542  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.749566  141927 pod_ready.go:81] duration metric: took 399.34726ms for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749573  141927 pod_ready.go:38] duration metric: took 3.312805349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
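Each pod_ready wait above boils down to inspecting the PodReady condition on pods matching the listed selectors. A rough client-go sketch of that check, assuming the kubeconfig path from this run and abbreviating error handling:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path taken from this run's log; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18703-76456/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One of the selectors the log lists for system-critical pods.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s Ready=%v\n", p.Name, ready)
	}
}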
	I0420 01:30:46.749587  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:30:46.749647  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:46.785318  141927 api_server.go:72] duration metric: took 3.660207577s to wait for apiserver process to appear ...
	I0420 01:30:46.785349  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:30:46.785373  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:30:46.793933  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:30:46.794890  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:30:46.794911  141927 api_server.go:131] duration metric: took 9.555146ms to wait for apiserver health ...
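The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, which answers with the body "ok" on success. A minimal sketch of that probe, skipping certificate verification purely for brevity (real callers should trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// TLS verification disabled only for illustration; use the cluster CA in practice.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.222:8444/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
}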
	I0420 01:30:46.794920  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:30:46.953036  141927 system_pods.go:59] 9 kube-system pods found
	I0420 01:30:46.953066  141927 system_pods.go:61] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:46.953070  141927 system_pods.go:61] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:46.953074  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:46.953077  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:46.953081  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:46.953085  141927 system_pods.go:61] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:46.953088  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:46.953094  141927 system_pods.go:61] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:46.953098  141927 system_pods.go:61] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:46.953108  141927 system_pods.go:74] duration metric: took 158.182751ms to wait for pod list to return data ...
	I0420 01:30:46.953116  141927 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:30:47.151205  141927 default_sa.go:45] found service account: "default"
	I0420 01:30:47.151245  141927 default_sa.go:55] duration metric: took 198.121475ms for default service account to be created ...
	I0420 01:30:47.151274  141927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:30:47.354321  141927 system_pods.go:86] 9 kube-system pods found
	I0420 01:30:47.354348  141927 system_pods.go:89] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:47.354353  141927 system_pods.go:89] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:47.354358  141927 system_pods.go:89] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:47.354364  141927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:47.354369  141927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:47.354373  141927 system_pods.go:89] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:47.354376  141927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:47.354383  141927 system_pods.go:89] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:47.354387  141927 system_pods.go:89] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:47.354395  141927 system_pods.go:126] duration metric: took 203.115923ms to wait for k8s-apps to be running ...
	I0420 01:30:47.354403  141927 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:30:47.354452  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.370946  141927 system_svc.go:56] duration metric: took 16.532953ms WaitForService to wait for kubelet
	I0420 01:30:47.370977  141927 kubeadm.go:576] duration metric: took 4.245884115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:30:47.370997  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:30:47.550097  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:30:47.550127  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:30:47.550138  141927 node_conditions.go:105] duration metric: took 179.136105ms to run NodePressure ...
	I0420 01:30:47.550150  141927 start.go:240] waiting for startup goroutines ...
	I0420 01:30:47.550156  141927 start.go:245] waiting for cluster config update ...
	I0420 01:30:47.550167  141927 start.go:254] writing updated cluster config ...
	I0420 01:30:47.550493  141927 ssh_runner.go:195] Run: rm -f paused
	I0420 01:30:47.614715  141927 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:30:47.616658  141927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-907988" cluster and "default" namespace by default
	I0420 01:30:47.623645  142057 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.736926697s)
	I0420 01:30:47.623716  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.648132  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:47.662521  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:47.674241  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:47.674265  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:47.674311  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:47.684981  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:47.685037  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:47.696549  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:47.706838  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:47.706885  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:47.717387  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.732194  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:47.732252  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.743425  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:47.756579  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:47.756629  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:47.769210  142057 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:47.832909  142057 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:47.832972  142057 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:47.987090  142057 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:47.987209  142057 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:47.987380  142057 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:48.253287  142057 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:48.255451  142057 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:48.255552  142057 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:48.255657  142057 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:48.255767  142057 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:48.255880  142057 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:48.255992  142057 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:48.256076  142057 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:48.256170  142057 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:48.256250  142057 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:48.256344  142057 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:48.256445  142057 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:48.256500  142057 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:48.256563  142057 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:48.346357  142057 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:48.602240  142057 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:48.741597  142057 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:49.086311  142057 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:49.284340  142057 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:49.284671  142057 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:49.287663  142057 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:46.540199  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:48.540848  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:50.541579  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:49.289305  142057 out.go:204]   - Booting up control plane ...
	I0420 01:30:49.289430  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:49.289558  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:49.289646  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:49.309520  142057 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:49.311328  142057 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:49.311389  142057 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:49.448766  142057 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:49.448889  142057 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:49.950225  142057 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.460713ms
	I0420 01:30:49.950316  142057 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:55.452587  142057 kubeadm.go:309] [api-check] The API server is healthy after 5.502061843s
	I0420 01:30:55.466768  142057 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:55.500892  142057 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:55.538376  142057 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:55.538631  142057 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-269507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:55.559344  142057 kubeadm.go:309] [bootstrap-token] Using token: jtn2hn.nnhc9vssv65463xy
	I0420 01:30:52.542748  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.560872  142057 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:55.561022  142057 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:55.575617  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:55.583307  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:55.586398  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:55.596138  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:55.599717  142057 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:55.861367  142057 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:56.310991  142057 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:56.860904  142057 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:56.860939  142057 kubeadm.go:309] 
	I0420 01:30:56.861051  142057 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:56.861077  142057 kubeadm.go:309] 
	I0420 01:30:56.861180  142057 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:56.861201  142057 kubeadm.go:309] 
	I0420 01:30:56.861232  142057 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:56.861345  142057 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:56.861438  142057 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:56.861454  142057 kubeadm.go:309] 
	I0420 01:30:56.861534  142057 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:56.861544  142057 kubeadm.go:309] 
	I0420 01:30:56.861628  142057 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:56.861644  142057 kubeadm.go:309] 
	I0420 01:30:56.861728  142057 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:56.861822  142057 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:56.861895  142057 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:56.861923  142057 kubeadm.go:309] 
	I0420 01:30:56.862120  142057 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:56.862228  142057 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:56.862246  142057 kubeadm.go:309] 
	I0420 01:30:56.862371  142057 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862532  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:56.862571  142057 kubeadm.go:309] 	--control-plane 
	I0420 01:30:56.862580  142057 kubeadm.go:309] 
	I0420 01:30:56.862700  142057 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:56.862724  142057 kubeadm.go:309] 
	I0420 01:30:56.862827  142057 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862955  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:56.863259  142057 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:56.863343  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:30:56.863358  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:56.865193  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:57.541555  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:00.040222  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:56.866515  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:56.880013  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:30:56.900677  142057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:56.900773  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:56.900809  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-269507 minikube.k8s.io/updated_at=2024_04_20T01_30_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=embed-certs-269507 minikube.k8s.io/primary=true
	I0420 01:30:56.942362  142057 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:57.124807  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:57.625201  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.125867  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.625845  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.124923  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.625004  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.125467  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.625081  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:01.125446  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.539751  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:04.540090  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:01.625279  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.125084  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.625048  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.125567  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.625428  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.125592  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.625874  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.125031  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.625698  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:06.125620  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.054009  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:31:07.054375  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:07.054708  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:06.625682  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.125909  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.625563  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.125451  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.625265  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.125677  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.625433  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.720318  142057 kubeadm.go:1107] duration metric: took 12.81961115s to wait for elevateKubeSystemPrivileges
	W0420 01:31:09.720362  142057 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:31:09.720373  142057 kubeadm.go:393] duration metric: took 5m17.067399347s to StartCluster
	I0420 01:31:09.720426  142057 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.720552  142057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:31:09.722646  142057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.722904  142057 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:31:09.724771  142057 out.go:177] * Verifying Kubernetes components...
	I0420 01:31:09.722979  142057 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:31:09.723175  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:31:09.724863  142057 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-269507"
	I0420 01:31:09.726208  142057 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-269507"
	W0420 01:31:09.726229  142057 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:31:09.724870  142057 addons.go:69] Setting default-storageclass=true in profile "embed-certs-269507"
	I0420 01:31:09.726270  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726289  142057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-269507"
	I0420 01:31:09.724889  142057 addons.go:69] Setting metrics-server=true in profile "embed-certs-269507"
	I0420 01:31:09.726351  142057 addons.go:234] Setting addon metrics-server=true in "embed-certs-269507"
	W0420 01:31:09.726365  142057 addons.go:243] addon metrics-server should already be in state true
	I0420 01:31:09.726395  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726159  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:31:09.726699  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726737  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726771  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726785  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726803  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726793  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.742932  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41221
	I0420 01:31:09.743143  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0420 01:31:09.743375  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743666  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743951  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.743968  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744102  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.744120  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744439  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.744497  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.745152  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745162  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745178  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745195  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745923  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0420 01:31:09.746441  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.747173  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.747202  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.747637  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.747934  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.751736  142057 addons.go:234] Setting addon default-storageclass=true in "embed-certs-269507"
	W0420 01:31:09.751760  142057 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:31:09.751791  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.752174  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.752199  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.763296  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I0420 01:31:09.763475  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0420 01:31:09.764103  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764119  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764635  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764656  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.764807  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764821  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.765353  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765369  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765562  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.766352  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.767675  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.769455  142057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:31:09.768866  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.770529  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:31:09.770596  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:31:09.770618  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.771959  142057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:31:07.039635  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.040381  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.772109  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0420 01:31:09.773531  142057 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:09.773545  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:31:09.773560  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.773989  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.774697  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.774711  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.774889  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775069  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.775522  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.775550  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.775770  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.775840  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.775855  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775973  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.776144  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.776283  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.776967  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777306  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.777376  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777621  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.777811  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.777949  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.778092  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.791609  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0420 01:31:09.792008  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.792475  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.792492  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.792811  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.793110  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.794743  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.795008  142057 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:09.795023  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:31:09.795037  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.797655  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798120  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.798144  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798394  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.798603  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.798745  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.798888  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.957088  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:31:10.012344  142057 node_ready.go:35] waiting up to 6m0s for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023887  142057 node_ready.go:49] node "embed-certs-269507" has status "Ready":"True"
	I0420 01:31:10.023917  142057 node_ready.go:38] duration metric: took 11.536403ms for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023929  142057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:10.035096  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:10.210022  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:10.222715  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:10.251807  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:31:10.251836  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:31:10.342638  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:31:10.342664  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:31:10.480676  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:10.480700  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:31:10.655186  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:11.331066  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121005107s)
	I0420 01:31:11.331125  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.108375538s)
	I0420 01:31:11.331139  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331152  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331165  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331181  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331530  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331601  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331611  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331641  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331664  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331681  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331684  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331692  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331699  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331646  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331932  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331959  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331979  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331991  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331989  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.332003  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.364269  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.364296  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.364641  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.364667  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.364671  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809229  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.154002194s)
	I0420 01:31:11.809282  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809618  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.809676  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809688  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809705  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809717  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809954  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809983  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.810001  142057 addons.go:470] Verifying addon metrics-server=true in "embed-certs-269507"
	I0420 01:31:11.810004  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.811610  142057 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:31:12.055506  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:12.055793  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:11.813049  142057 addons.go:505] duration metric: took 2.090078148s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:31:12.044618  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:12.565519  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.565543  142057 pod_ready.go:81] duration metric: took 2.530392572s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.565552  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.577986  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.578011  142057 pod_ready.go:81] duration metric: took 12.452506ms for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.578020  142057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595104  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.595129  142057 pod_ready.go:81] duration metric: took 17.103577ms for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595139  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602502  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.602524  142057 pod_ready.go:81] duration metric: took 7.377832ms for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602538  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608443  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.608462  142057 pod_ready.go:81] duration metric: took 5.916781ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608471  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939418  142057 pod_ready.go:92] pod "kube-proxy-4x66x" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.939444  142057 pod_ready.go:81] duration metric: took 330.966964ms for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939454  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341528  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:13.341556  142057 pod_ready.go:81] duration metric: took 402.093841ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341565  142057 pod_ready.go:38] duration metric: took 3.317622631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:13.341583  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:31:13.341648  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:31:13.361938  142057 api_server.go:72] duration metric: took 3.638999445s to wait for apiserver process to appear ...
	I0420 01:31:13.361967  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:31:13.361987  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:31:13.367149  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:31:13.368215  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:31:13.368243  142057 api_server.go:131] duration metric: took 6.268859ms to wait for apiserver health ...
	I0420 01:31:13.368254  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:31:13.545177  142057 system_pods.go:59] 9 kube-system pods found
	I0420 01:31:13.545203  142057 system_pods.go:61] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.545207  142057 system_pods.go:61] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.545212  142057 system_pods.go:61] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.545215  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.545219  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.545222  142057 system_pods.go:61] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.545224  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.545230  142057 system_pods.go:61] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.545233  142057 system_pods.go:61] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.545242  142057 system_pods.go:74] duration metric: took 176.980813ms to wait for pod list to return data ...
	I0420 01:31:13.545249  142057 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:31:13.739865  142057 default_sa.go:45] found service account: "default"
	I0420 01:31:13.739892  142057 default_sa.go:55] duration metric: took 194.636223ms for default service account to be created ...
	I0420 01:31:13.739903  142057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:31:13.942758  142057 system_pods.go:86] 9 kube-system pods found
	I0420 01:31:13.942785  142057 system_pods.go:89] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.942793  142057 system_pods.go:89] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.942801  142057 system_pods.go:89] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.942812  142057 system_pods.go:89] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.942819  142057 system_pods.go:89] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.942829  142057 system_pods.go:89] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.942835  142057 system_pods.go:89] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.942846  142057 system_pods.go:89] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.942854  142057 system_pods.go:89] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.942863  142057 system_pods.go:126] duration metric: took 202.954629ms to wait for k8s-apps to be running ...
	I0420 01:31:13.942873  142057 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:31:13.942926  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:31:13.962754  142057 system_svc.go:56] duration metric: took 19.872903ms WaitForService to wait for kubelet
	I0420 01:31:13.962781  142057 kubeadm.go:576] duration metric: took 4.239850872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:31:13.962802  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:31:14.139800  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:31:14.139834  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:31:14.139848  142057 node_conditions.go:105] duration metric: took 177.041675ms to run NodePressure ...
	I0420 01:31:14.139862  142057 start.go:240] waiting for startup goroutines ...
	I0420 01:31:14.139872  142057 start.go:245] waiting for cluster config update ...
	I0420 01:31:14.139886  142057 start.go:254] writing updated cluster config ...
	I0420 01:31:14.140201  142057 ssh_runner.go:195] Run: rm -f paused
	I0420 01:31:14.190985  142057 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:31:14.193207  142057 out.go:177] * Done! kubectl is now configured to use "embed-certs-269507" cluster and "default" namespace by default
	I0420 01:31:11.040724  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:13.043491  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:15.540182  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:17.540894  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:19.541858  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:22.056094  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:22.056315  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:22.039484  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:24.043137  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:26.043262  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:28.540379  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:30.540568  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:32.543371  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:35.040187  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:37.541354  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:40.039779  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:42.057024  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:42.057278  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:42.040147  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:44.540170  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:46.540576  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:48.543604  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:51.034230  141746 pod_ready.go:81] duration metric: took 4m0.001077028s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	E0420 01:31:51.034258  141746 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:31:51.034280  141746 pod_ready.go:38] duration metric: took 4m12.046687249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:51.034308  141746 kubeadm.go:591] duration metric: took 4m55.947094434s to restartPrimaryControlPlane
	W0420 01:31:51.034367  141746 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:31:51.034400  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:22.058965  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:32:22.059213  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:32:22.059231  142411 kubeadm.go:309] 
	I0420 01:32:22.059284  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:32:22.059341  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:32:22.059351  142411 kubeadm.go:309] 
	I0420 01:32:22.059398  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:32:22.059449  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:32:22.059581  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:32:22.059606  142411 kubeadm.go:309] 
	I0420 01:32:22.059693  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:32:22.059725  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:32:22.059796  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:32:22.059821  142411 kubeadm.go:309] 
	I0420 01:32:22.059916  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:32:22.060046  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:32:22.060068  142411 kubeadm.go:309] 
	I0420 01:32:22.060225  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:32:22.060371  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:32:22.060498  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:32:22.060624  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:32:22.060643  142411 kubeadm.go:309] 
	I0420 01:32:22.061155  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:22.061294  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:32:22.061403  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0420 01:32:22.061569  142411 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0420 01:32:22.061628  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:23.211059  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.149398853s)
	I0420 01:32:23.211147  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.228140  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.240832  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.240868  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.240912  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.252674  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.252735  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.264128  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.274998  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.275059  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.286449  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.297377  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.297452  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.308971  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.320775  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.320842  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.333601  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.490252  141746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.455825605s)
	I0420 01:32:23.490330  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.515027  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:32:23.528835  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.542901  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.542927  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.542969  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.554931  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.555006  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.570665  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.583505  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.583576  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.595835  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.607468  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.607538  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.620629  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.634141  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.634222  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.648360  141746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.727697  141746 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:32:23.727825  141746 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:32:23.899280  141746 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:32:23.899376  141746 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:32:23.899456  141746 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:32:24.139299  141746 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:32:24.141410  141746 out.go:204]   - Generating certificates and keys ...
	I0420 01:32:24.141522  141746 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:32:24.141618  141746 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:32:24.141719  141746 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:32:24.141814  141746 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:32:24.141912  141746 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:32:24.141987  141746 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:32:24.142076  141746 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:32:24.142172  141746 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:32:24.142348  141746 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:32:24.142589  141746 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:32:24.142757  141746 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:32:24.142990  141746 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:32:24.247270  141746 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:32:24.326535  141746 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:32:24.538489  141746 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:32:24.594810  141746 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:32:24.712812  141746 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:32:24.713304  141746 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:32:24.719376  141746 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:32:24.721510  141746 out.go:204]   - Booting up control plane ...
	I0420 01:32:24.721649  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:32:24.721781  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:32:24.722470  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:32:24.748410  141746 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:32:24.750247  141746 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:32:24.750320  141746 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:32:24.906734  141746 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:32:24.906859  141746 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:32:25.409625  141746 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.844847ms
	I0420 01:32:25.409771  141746 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:32:23.603058  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:30.912062  141746 kubeadm.go:309] [api-check] The API server is healthy after 5.502434175s
	I0420 01:32:30.935231  141746 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:32:30.954860  141746 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:32:30.990255  141746 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:32:30.990480  141746 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-338118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:32:31.004218  141746 kubeadm.go:309] [bootstrap-token] Using token: 6ub3et.0wyu42zodual4kt8
	I0420 01:32:31.005771  141746 out.go:204]   - Configuring RBAC rules ...
	I0420 01:32:31.005875  141746 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:32:31.011978  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:32:31.020750  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:32:31.024958  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:32:31.032499  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:32:31.037128  141746 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:32:31.320324  141746 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:32:31.761773  141746 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:32:32.322540  141746 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:32:32.322563  141746 kubeadm.go:309] 
	I0420 01:32:32.322633  141746 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:32:32.322648  141746 kubeadm.go:309] 
	I0420 01:32:32.322728  141746 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:32:32.322737  141746 kubeadm.go:309] 
	I0420 01:32:32.322763  141746 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:32:32.322833  141746 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:32:32.322906  141746 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:32:32.322918  141746 kubeadm.go:309] 
	I0420 01:32:32.323005  141746 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:32:32.323015  141746 kubeadm.go:309] 
	I0420 01:32:32.323083  141746 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:32:32.323110  141746 kubeadm.go:309] 
	I0420 01:32:32.323184  141746 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:32:32.323304  141746 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:32:32.323362  141746 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:32:32.323372  141746 kubeadm.go:309] 
	I0420 01:32:32.323522  141746 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:32:32.323660  141746 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:32:32.323677  141746 kubeadm.go:309] 
	I0420 01:32:32.323765  141746 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.323916  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:32:32.323948  141746 kubeadm.go:309] 	--control-plane 
	I0420 01:32:32.323957  141746 kubeadm.go:309] 
	I0420 01:32:32.324035  141746 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:32:32.324049  141746 kubeadm.go:309] 
	I0420 01:32:32.324201  141746 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.324348  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:32:32.324967  141746 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
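The init output above already prints the standard follow-up steps verbatim. As a quick sanity check after an init like this one (assuming kubeadm's default admin.conf path, which is a convention rather than a value taken from this log), the new control plane can be confirmed reachable with:

    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl get nodes -o wide          # the node should go Ready once the CNI configured below is in place
    kubectl -n kube-system get pods    # etcd, kube-apiserver, kube-controller-manager, kube-scheduler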
	I0420 01:32:32.325210  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:32:32.325228  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:32:32.327624  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:32:32.329029  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:32:32.344181  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
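The 496-byte conflist itself is not reproduced in the log; a typical bridge CNI configuration of the kind written here (the cniVersion and pod subnet are illustrative assumptions, not values from this run) looks roughly like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF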
	I0420 01:32:32.368978  141746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:32:32.369052  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:32.369086  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-338118 minikube.k8s.io/updated_at=2024_04_20T01_32_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=no-preload-338118 minikube.k8s.io/primary=true
	I0420 01:32:32.579160  141746 ops.go:34] apiserver oom_adj: -16
	I0420 01:32:32.579218  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.079458  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.579498  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.079957  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.579520  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.079902  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.579955  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.079525  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.579612  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.079831  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.579989  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.079481  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.579798  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.080239  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.579654  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.080267  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.579837  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.079840  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.579347  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.079368  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.579641  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.079257  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.579647  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.079317  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.580002  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.698993  141746 kubeadm.go:1107] duration metric: took 12.330007154s to wait for elevateKubeSystemPrivileges
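The elevateKubeSystemPrivileges step timed above is exactly what the preceding lines show: bind cluster-admin to the kube-system default service account, then poll "kubectl get sa default" roughly every 500ms until the account exists. Reduced to a shell sketch using the same paths as the log:

    # grant kube-system:default cluster-admin, as the minikube-rbac binding above does
    sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # wait for the controller-manager to create the default service account
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done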
	W0420 01:32:44.699036  141746 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:32:44.699045  141746 kubeadm.go:393] duration metric: took 5m49.674421659s to StartCluster
	I0420 01:32:44.699064  141746 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.699166  141746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:32:44.700731  141746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.700982  141746 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:32:44.702752  141746 out.go:177] * Verifying Kubernetes components...
	I0420 01:32:44.701040  141746 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:32:44.701201  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:32:44.704065  141746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-338118"
	I0420 01:32:44.704078  141746 addons.go:69] Setting metrics-server=true in profile "no-preload-338118"
	I0420 01:32:44.704077  141746 addons.go:69] Setting default-storageclass=true in profile "no-preload-338118"
	I0420 01:32:44.704099  141746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-338118"
	W0420 01:32:44.704105  141746 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:32:44.704114  141746 addons.go:234] Setting addon metrics-server=true in "no-preload-338118"
	I0420 01:32:44.704113  141746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-338118"
	W0420 01:32:44.704124  141746 addons.go:243] addon metrics-server should already be in state true
	I0420 01:32:44.704151  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704157  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704069  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:32:44.704452  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704485  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704503  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704521  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704535  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704545  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.720663  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0420 01:32:44.720685  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0420 01:32:44.721210  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721222  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721746  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721766  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.721901  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721925  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.722282  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722311  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722889  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.722914  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.723194  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0420 01:32:44.723775  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.724401  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.724427  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.724790  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.724975  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.728728  141746 addons.go:234] Setting addon default-storageclass=true in "no-preload-338118"
	W0420 01:32:44.728751  141746 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:32:44.728780  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.729136  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.729161  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.738505  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0420 01:32:44.738893  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.739388  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.739409  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.739916  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.740120  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.741929  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0420 01:32:44.742090  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.744131  141746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:32:44.742538  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.745561  141746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:44.745579  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:32:44.745597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.744662  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.745640  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.745994  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.746345  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.747491  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0420 01:32:44.747878  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.748594  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.748731  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.748752  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.750445  141746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:32:44.749050  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.749380  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.749990  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.752010  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:32:44.752029  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:32:44.752046  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.752131  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.752155  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.752307  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.752479  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.752647  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.752676  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.752676  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.754727  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755188  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.755216  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755497  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.755696  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.755866  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.756034  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.768442  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0420 01:32:44.768887  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.769453  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.769473  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.769852  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.770359  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.772155  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.772443  141746 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:44.772651  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:32:44.772686  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.775775  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776177  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.776205  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776313  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.776492  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.776667  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.776832  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.930301  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:32:44.948472  141746 node_ready.go:35] waiting up to 6m0s for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960637  141746 node_ready.go:49] node "no-preload-338118" has status "Ready":"True"
	I0420 01:32:44.960664  141746 node_ready.go:38] duration metric: took 12.15407ms for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960676  141746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:44.971143  141746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980894  141746 pod_ready.go:92] pod "etcd-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.980917  141746 pod_ready.go:81] duration metric: took 9.749994ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980929  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995192  141746 pod_ready.go:92] pod "kube-apiserver-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.995217  141746 pod_ready.go:81] duration metric: took 14.279681ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995229  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004302  141746 pod_ready.go:92] pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:45.004324  141746 pod_ready.go:81] duration metric: took 9.086713ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004338  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.062482  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:45.066314  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:32:45.066334  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:32:45.093830  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:45.148558  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:32:45.148600  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:32:45.235321  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:45.235349  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:32:45.275661  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:46.686292  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592425062s)
	I0420 01:32:46.686344  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.623774979s)
	I0420 01:32:46.686360  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686375  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686385  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686401  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686822  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.686897  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.686911  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686920  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686835  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.686839  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687001  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687013  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.687027  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686850  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.687153  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687166  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687359  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687373  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.697988  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.422274698s)
	I0420 01:32:46.698045  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698059  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698320  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698339  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698351  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698359  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698568  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.698658  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698676  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698687  141746 addons.go:470] Verifying addon metrics-server=true in "no-preload-338118"
	I0420 01:32:46.733170  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.733198  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.733551  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.733573  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.733605  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.735297  141746 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0420 01:32:46.736665  141746 addons.go:505] duration metric: took 2.035625149s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
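With the three addons reported enabled, a spot check from outside the test harness (resource names here are the usual minikube ones, inferred rather than read from this log) would be:

    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass           # the default-storageclass addon marks one class as default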
	I0420 01:32:47.011271  141746 pod_ready.go:92] pod "kube-proxy-f57d9" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.011299  141746 pod_ready.go:81] duration metric: took 2.006954798s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.011309  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025378  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.025408  141746 pod_ready.go:81] duration metric: took 14.090474ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025421  141746 pod_ready.go:38] duration metric: took 2.064731781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
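The extra wait above cycles through each system-critical label in turn; a rough standalone equivalent of that readiness gate (a sketch, not the exact logic the harness runs) is:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=6m
    done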
	I0420 01:32:47.025443  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:32:47.025511  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:32:47.052680  141746 api_server.go:72] duration metric: took 2.351656586s to wait for apiserver process to appear ...
	I0420 01:32:47.052712  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:32:47.052738  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:32:47.061908  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:32:47.065615  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:32:47.065641  141746 api_server.go:131] duration metric: took 12.920384ms to wait for apiserver health ...
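The health check is a plain GET against /healthz on the apiserver; performed by hand from the host it is approximately:

    # -k because the apiserver presents a cluster-internal CA; the expected body is the literal "ok"
    curl -k https://192.168.72.89:8443/healthz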
	I0420 01:32:47.065651  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:32:47.158039  141746 system_pods.go:59] 9 kube-system pods found
	I0420 01:32:47.158076  141746 system_pods.go:61] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158087  141746 system_pods.go:61] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158096  141746 system_pods.go:61] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.158101  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.158107  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.158113  141746 system_pods.go:61] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.158117  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.158126  141746 system_pods.go:61] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.158134  141746 system_pods.go:61] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.158147  141746 system_pods.go:74] duration metric: took 92.489697ms to wait for pod list to return data ...
	I0420 01:32:47.158162  141746 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:32:47.351962  141746 default_sa.go:45] found service account: "default"
	I0420 01:32:47.352002  141746 default_sa.go:55] duration metric: took 193.830142ms for default service account to be created ...
	I0420 01:32:47.352016  141746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:32:47.557471  141746 system_pods.go:86] 9 kube-system pods found
	I0420 01:32:47.557511  141746 system_pods.go:89] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557524  141746 system_pods.go:89] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557534  141746 system_pods.go:89] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.557540  141746 system_pods.go:89] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.557547  141746 system_pods.go:89] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.557554  141746 system_pods.go:89] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.557564  141746 system_pods.go:89] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.557577  141746 system_pods.go:89] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.557589  141746 system_pods.go:89] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.557602  141746 system_pods.go:126] duration metric: took 205.577946ms to wait for k8s-apps to be running ...
	I0420 01:32:47.557615  141746 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:32:47.557674  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:47.577745  141746 system_svc.go:56] duration metric: took 20.111982ms WaitForService to wait for kubelet
	I0420 01:32:47.577774  141746 kubeadm.go:576] duration metric: took 2.876759476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:32:47.577794  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:32:47.753216  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:32:47.753246  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:32:47.753257  141746 node_conditions.go:105] duration metric: took 175.457668ms to run NodePressure ...
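The NodePressure check reads capacity straight from the node status; the same figures (17734596Ki ephemeral storage, 2 CPUs) and the pressure conditions are visible with:

    kubectl get node no-preload-338118 -o jsonpath='{.status.capacity}{"\n"}'
    kubectl describe node no-preload-338118 | grep -A 6 'Conditions:'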
	I0420 01:32:47.753269  141746 start.go:240] waiting for startup goroutines ...
	I0420 01:32:47.753275  141746 start.go:245] waiting for cluster config update ...
	I0420 01:32:47.753286  141746 start.go:254] writing updated cluster config ...
	I0420 01:32:47.753612  141746 ssh_runner.go:195] Run: rm -f paused
	I0420 01:32:47.804681  141746 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:32:47.806823  141746 out.go:177] * Done! kubectl is now configured to use "no-preload-338118" cluster and "default" namespace by default
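"Done" here means the kubeconfig updated earlier now carries this profile as its current context; from the test host that is verifiable with:

    # point at the kubeconfig the run wrote above
    export KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
    kubectl config current-context     # typically the profile name, no-preload-338118
    kubectl get pods -A                # should answer without any extra flags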
	I0420 01:34:20.028550  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:34:20.028769  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0420 01:34:20.030749  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:34:20.030826  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:34:20.030947  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:34:20.031078  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:34:20.031217  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:34:20.031319  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:34:20.032927  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:34:20.033024  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:34:20.033110  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:34:20.033211  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:34:20.033286  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:34:20.033410  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:34:20.033496  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:34:20.033597  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:34:20.033695  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:34:20.033805  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:34:20.033921  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:34:20.033972  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:34:20.034042  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:34:20.034125  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:34:20.034200  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:34:20.034287  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:34:20.034355  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:34:20.034510  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:34:20.034614  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:34:20.034680  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:34:20.034760  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:34:20.036300  142411 out.go:204]   - Booting up control plane ...
	I0420 01:34:20.036380  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:34:20.036479  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:34:20.036583  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:34:20.036705  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:34:20.036888  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:34:20.036955  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:34:20.037046  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037228  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037291  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037494  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037576  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037730  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037789  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037977  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038044  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.038262  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038284  142411 kubeadm.go:309] 
	I0420 01:34:20.038341  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:34:20.038382  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:34:20.038396  142411 kubeadm.go:309] 
	I0420 01:34:20.038443  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:34:20.038476  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:34:20.038612  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:34:20.038625  142411 kubeadm.go:309] 
	I0420 01:34:20.038735  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:34:20.038767  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:34:20.038794  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:34:20.038808  142411 kubeadm.go:309] 
	I0420 01:34:20.038902  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:34:20.038977  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:34:20.038987  142411 kubeadm.go:309] 
	I0420 01:34:20.039101  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:34:20.039203  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:34:20.039274  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:34:20.039342  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:34:20.039384  142411 kubeadm.go:309] 
	I0420 01:34:20.039417  142411 kubeadm.go:393] duration metric: took 8m0.622979268s to StartCluster
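From this point the tooling collects diagnostics automatically; done by hand on the node, the triage the kubeadm output recommends boils down to:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then, for any failing container id found above:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID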
	I0420 01:34:20.039459  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:34:20.039514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:34:20.090236  142411 cri.go:89] found id: ""
	I0420 01:34:20.090262  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.090270  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:34:20.090276  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:34:20.090331  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:34:20.133841  142411 cri.go:89] found id: ""
	I0420 01:34:20.133867  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.133875  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:34:20.133883  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:34:20.133955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:34:20.176186  142411 cri.go:89] found id: ""
	I0420 01:34:20.176219  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.176230  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:34:20.176235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:34:20.176295  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:34:20.214895  142411 cri.go:89] found id: ""
	I0420 01:34:20.214932  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.214944  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:34:20.214951  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:34:20.215018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:34:20.257759  142411 cri.go:89] found id: ""
	I0420 01:34:20.257786  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.257795  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:34:20.257800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:34:20.257857  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:34:20.298111  142411 cri.go:89] found id: ""
	I0420 01:34:20.298153  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.298164  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:34:20.298172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:34:20.298226  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:34:20.333435  142411 cri.go:89] found id: ""
	I0420 01:34:20.333469  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.333481  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:34:20.333489  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:34:20.333554  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:34:20.370848  142411 cri.go:89] found id: ""
	I0420 01:34:20.370872  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.370880  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:34:20.370890  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:34:20.370902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:34:20.425495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:34:20.425536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:34:20.442039  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:34:20.442066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:34:20.523456  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:34:20.523483  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:34:20.523504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:34:20.633387  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:34:20.633427  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0420 01:34:20.688731  142411 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0420 01:34:20.688783  142411 out.go:239] * 
	W0420 01:34:20.688839  142411 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.688862  142411 out.go:239] * 
	W0420 01:34:20.689758  142411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:34:20.693376  142411 out.go:177] 
	W0420 01:34:20.694909  142411 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.694971  142411 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0420 01:34:20.695003  142411 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0420 01:34:20.696409  142411 out.go:177] 
	
	
	==> CRI-O <==
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.900366533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577405900335927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=610ed4d0-3903-46c1-923d-c9173ab0fb96 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.901109010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=248db649-ed32-43de-8728-3244e5e24a3c name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.901170799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=248db649-ed32-43de-8728-3244e5e24a3c name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.901200888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=248db649-ed32-43de-8728-3244e5e24a3c name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.937464775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a37e6636-0d04-4b77-8947-34c287eebe5c name=/runtime.v1.RuntimeService/Version
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.937536059Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a37e6636-0d04-4b77-8947-34c287eebe5c name=/runtime.v1.RuntimeService/Version
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.938937492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5abb1bc0-2dc4-4098-95d6-cb40ed937de5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.939306098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577405939287553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5abb1bc0-2dc4-4098-95d6-cb40ed937de5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.940077440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7dcb7d3-028d-4634-a91a-8943067c51c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.940126041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7dcb7d3-028d-4634-a91a-8943067c51c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.940165294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c7dcb7d3-028d-4634-a91a-8943067c51c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.972310226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de4a44c7-4348-469e-b259-283a3415c9da name=/runtime.v1.RuntimeService/Version
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.972384661Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de4a44c7-4348-469e-b259-283a3415c9da name=/runtime.v1.RuntimeService/Version
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.973556889Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6f0e4d4-f87d-456e-b92d-04c1b2c1162f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.974081407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577405974055764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6f0e4d4-f87d-456e-b92d-04c1b2c1162f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.974742669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33530d2d-0489-4646-8071-8435befd72c2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.974793297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33530d2d-0489-4646-8071-8435befd72c2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:25 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:25.974832511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33530d2d-0489-4646-8071-8435befd72c2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:26 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:26.012844710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f70619fe-d825-47b9-8fdb-a0d0f05cad98 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:43:26 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:26.012999254Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f70619fe-d825-47b9-8fdb-a0d0f05cad98 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:43:26 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:26.014272406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad972858-276c-4001-b81f-8ad915b7789b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:43:26 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:26.014641071Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577406014622498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad972858-276c-4001-b81f-8ad915b7789b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:43:26 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:26.015159183Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60e4561b-a589-4579-bfbd-d13be75d76ed name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:26 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:26.015213295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60e4561b-a589-4579-bfbd-d13be75d76ed name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:43:26 old-k8s-version-564860 crio[649]: time="2024-04-20 01:43:26.015248710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=60e4561b-a589-4579-bfbd-d13be75d76ed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr20 01:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057920] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044405] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.872024] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.695018] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Apr20 01:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.212298] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068714] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074791] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.229235] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.132751] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.310058] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.263827] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.070157] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.032326] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +8.834390] kauditd_printk_skb: 46 callbacks suppressed
	[Apr20 01:30] systemd-fstab-generator[5001]: Ignoring "noauto" option for root device
	[Apr20 01:32] systemd-fstab-generator[5277]: Ignoring "noauto" option for root device
	[  +0.067931] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:43:26 up 17 min,  0 users,  load average: 0.00, 0.07, 0.08
	Linux old-k8s-version-564860 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bd4f00, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00090a480, 0x24, 0x1000000000060, 0x7f6a3883e998, 0x118, ...)
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]: net/http.(*Transport).dial(0xc0006cec80, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00090a480, 0x24, 0x0, 0x0, 0x4f0b860, ...)
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]: net/http.(*Transport).dialConn(0xc0006cec80, 0x4f7fe00, 0xc000052030, 0x0, 0xc0003fa420, 0x5, 0xc00090a480, 0x24, 0x0, 0xc000ca2000, ...)
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]: net/http.(*Transport).dialConnFor(0xc0006cec80, 0xc0004d0370)
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]: created by net/http.(*Transport).queueForDial
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]: goroutine 165 [select]:
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000d086c0, 0xc000d36080, 0xc0003fa900, 0xc0003fa8a0)
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]: created by net.(*netFD).connect
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6464]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 20 01:43:24 old-k8s-version-564860 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 20 01:43:24 old-k8s-version-564860 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 20 01:43:24 old-k8s-version-564860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 20 01:43:24 old-k8s-version-564860 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 20 01:43:24 old-k8s-version-564860 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6492]: I0420 01:43:24.902259    6492 server.go:416] Version: v1.20.0
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6492]: I0420 01:43:24.902567    6492 server.go:837] Client rotation is on, will bootstrap in background
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6492]: I0420 01:43:24.904962    6492 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6492]: W0420 01:43:24.906552    6492 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 20 01:43:24 old-k8s-version-564860 kubelet[6492]: I0420 01:43:24.906788    6492 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 2 (275.593013ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-564860" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.39s)
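The failure output above repeats kubeadm's own troubleshooting hints and minikube's cgroup-driver suggestion. A minimal sketch of following them up on the node; the profile name, the CRI-O socket path and the --extra-config flag are taken from the logs above, while reaching the VM via 'minikube ssh' is an assumption for illustration, not something the harness does:

	# inspect the kubelet service on the node (e.g. after 'minikube ssh -p old-k8s-version-564860')
	systemctl status kubelet
	journalctl -xeu kubelet
	# list any control-plane containers CRI-O started, as kubeadm suggests
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry the start with an explicit cgroup driver, as minikube suggests
	minikube start -p old-k8s-version-564860 --extra-config=kubelet.cgroup-driver=systemd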

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (412.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-20 01:46:42.993931423 +0000 UTC m=+6576.508798817
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-907988 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-907988 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.927µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-907988 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
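The assertion above first waits for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, then describes the dashboard-metrics-scraper deployment to check its image. A minimal sketch of the equivalent manual check, assuming the same kubeconfig context the test uses:

	kubectl --context default-k8s-diff-port-907988 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-907988 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard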
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-907988 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-907988 logs -n 25: (1.285501726s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-269507            | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-564860        | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-338118                  | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-907988       | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:30 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-269507                 | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-564860             | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:44 UTC | 20 Apr 24 01:44 UTC |
	| start   | -p newest-cni-776287 --memory=2200 --alsologtostderr   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:44 UTC | 20 Apr 24 01:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:45 UTC |
	| addons  | enable metrics-server -p newest-cni-776287             | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-776287                                   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-776287                  | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-776287 --memory=2200 --alsologtostderr   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:46 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-776287 image list                           | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:46 UTC | 20 Apr 24 01:46 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-776287                                   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:46 UTC | 20 Apr 24 01:46 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-776287                                   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:46 UTC | 20 Apr 24 01:46 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:46 UTC | 20 Apr 24 01:46 UTC |
	| delete  | -p newest-cni-776287                                   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:46 UTC | 20 Apr 24 01:46 UTC |
	| delete  | -p newest-cni-776287                                   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:46 UTC | 20 Apr 24 01:46 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:45:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:45:45.545747  149242 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:45:45.545845  149242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:45:45.545853  149242 out.go:304] Setting ErrFile to fd 2...
	I0420 01:45:45.545857  149242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:45:45.546037  149242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:45:45.546584  149242 out.go:298] Setting JSON to false
	I0420 01:45:45.547521  149242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16093,"bootTime":1713561453,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:45:45.547581  149242 start.go:139] virtualization: kvm guest
	I0420 01:45:45.549970  149242 out.go:177] * [newest-cni-776287] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:45:45.551389  149242 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:45:45.551383  149242 notify.go:220] Checking for updates...
	I0420 01:45:45.552706  149242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:45:45.553997  149242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:45:45.555166  149242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:45:45.556326  149242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:45:45.557519  149242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:45:45.559314  149242 config.go:182] Loaded profile config "newest-cni-776287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:45:45.559976  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:45:45.560043  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:45:45.575366  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36161
	I0420 01:45:45.575794  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:45:45.576293  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:45:45.576313  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:45:45.576699  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:45:45.576907  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:45:45.577168  149242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:45:45.577527  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:45:45.577570  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:45:45.593011  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0420 01:45:45.593360  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:45:45.593792  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:45:45.593811  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:45:45.594122  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:45:45.594334  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:45:45.630879  149242 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:45:45.632102  149242 start.go:297] selected driver: kvm2
	I0420 01:45:45.632116  149242 start.go:901] validating driver "kvm2" against &{Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[
] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:45:45.632239  149242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:45:45.632875  149242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:45:45.632950  149242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:45:45.647874  149242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:45:45.648357  149242 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0420 01:45:45.648413  149242 cni.go:84] Creating CNI manager for ""
	I0420 01:45:45.648425  149242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:45:45.648465  149242 start.go:340] cluster config:
	{Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:45:45.648580  149242 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:45:45.650367  149242 out.go:177] * Starting "newest-cni-776287" primary control-plane node in "newest-cni-776287" cluster
	I0420 01:45:45.651618  149242 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:45:45.651664  149242 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:45:45.651674  149242 cache.go:56] Caching tarball of preloaded images
	I0420 01:45:45.651740  149242 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:45:45.651751  149242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:45:45.652181  149242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/config.json ...
	I0420 01:45:45.652446  149242 start.go:360] acquireMachinesLock for newest-cni-776287: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:45:45.652491  149242 start.go:364] duration metric: took 26.274µs to acquireMachinesLock for "newest-cni-776287"
	I0420 01:45:45.652504  149242 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:45:45.652513  149242 fix.go:54] fixHost starting: 
	I0420 01:45:45.653067  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:45:45.653107  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:45:45.666896  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45041
	I0420 01:45:45.667282  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:45:45.667763  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:45:45.667782  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:45:45.668106  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:45:45.668270  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:45:45.668375  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:45:45.670080  149242 fix.go:112] recreateIfNeeded on newest-cni-776287: state=Stopped err=<nil>
	I0420 01:45:45.670105  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	W0420 01:45:45.670276  149242 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:45:45.672028  149242 out.go:177] * Restarting existing kvm2 VM for "newest-cni-776287" ...
	I0420 01:45:45.673276  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Start
	I0420 01:45:45.673458  149242 main.go:141] libmachine: (newest-cni-776287) Ensuring networks are active...
	I0420 01:45:45.674242  149242 main.go:141] libmachine: (newest-cni-776287) Ensuring network default is active
	I0420 01:45:45.674616  149242 main.go:141] libmachine: (newest-cni-776287) Ensuring network mk-newest-cni-776287 is active
	I0420 01:45:45.674909  149242 main.go:141] libmachine: (newest-cni-776287) Getting domain xml...
	I0420 01:45:45.675567  149242 main.go:141] libmachine: (newest-cni-776287) Creating domain...
	I0420 01:45:46.878253  149242 main.go:141] libmachine: (newest-cni-776287) Waiting to get IP...
	I0420 01:45:46.879119  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:46.879598  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:46.879665  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:46.879560  149277 retry.go:31] will retry after 238.242433ms: waiting for machine to come up
	I0420 01:45:47.119199  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:47.119788  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:47.119815  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:47.119741  149277 retry.go:31] will retry after 241.219006ms: waiting for machine to come up
	I0420 01:45:47.362225  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:47.362712  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:47.362736  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:47.362657  149277 retry.go:31] will retry after 382.193297ms: waiting for machine to come up
	I0420 01:45:47.745943  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:47.746450  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:47.746478  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:47.746406  149277 retry.go:31] will retry after 452.25947ms: waiting for machine to come up
	I0420 01:45:48.200226  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:48.200700  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:48.200722  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:48.200650  149277 retry.go:31] will retry after 483.119811ms: waiting for machine to come up
	I0420 01:45:48.685397  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:48.685950  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:48.685990  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:48.685923  149277 retry.go:31] will retry after 760.841312ms: waiting for machine to come up
	I0420 01:45:49.448068  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:49.448533  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:49.448562  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:49.448479  149277 retry.go:31] will retry after 1.003742184s: waiting for machine to come up
	I0420 01:45:50.453596  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:50.454049  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:50.454077  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:50.454016  149277 retry.go:31] will retry after 1.167943095s: waiting for machine to come up
	I0420 01:45:51.623572  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:51.624052  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:51.624084  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:51.624000  149277 retry.go:31] will retry after 1.860901587s: waiting for machine to come up
	I0420 01:45:53.486439  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:53.486887  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:53.486921  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:53.486855  149277 retry.go:31] will retry after 2.19188582s: waiting for machine to come up
	I0420 01:45:55.680620  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:55.681068  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:55.681125  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:55.681050  149277 retry.go:31] will retry after 2.67498922s: waiting for machine to come up
	I0420 01:45:58.358618  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:58.359132  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:58.359164  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:58.359082  149277 retry.go:31] will retry after 3.197223234s: waiting for machine to come up
	I0420 01:46:01.557512  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:01.557968  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:46:01.557997  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:46:01.557931  149277 retry.go:31] will retry after 3.39301121s: waiting for machine to come up
	I0420 01:46:04.954477  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:04.955146  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has current primary IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:04.955180  149242 main.go:141] libmachine: (newest-cni-776287) Found IP for machine: 192.168.61.191
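	The run of "will retry after ..." messages above is the driver polling libvirt for the guest's DHCP lease with progressively longer delays until an address appears. Below is a minimal Go sketch of that wait-with-backoff pattern; the lookupIP helper, the delay schedule, and the jitter are illustrative assumptions for the sketch, not minikube's actual retry implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for asking the hypervisor for the domain's current
    // DHCP lease; it fails until the guest has obtained an address.
    func lookupIP(domain string) (string, error) {
        return "", errors.New("no lease for " + domain + " yet")
    }

    // waitForIP polls with a growing, slightly jittered delay, mirroring the
    // "will retry after 238ms / 241ms / 382ms / ..." progression in the log.
    func waitForIP(domain string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)/4))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if delay < 4*time.Second {
                delay *= 2 // back off, but cap the growth
            }
        }
        return "", fmt.Errorf("timed out waiting for an IP for %q", domain)
    }

    func main() {
        if _, err := waitForIP("newest-cni-776287", 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }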
	I0420 01:46:04.955220  149242 main.go:141] libmachine: (newest-cni-776287) Reserving static IP address...
	I0420 01:46:04.955593  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "newest-cni-776287", mac: "52:54:00:e3:cd:b1", ip: "192.168.61.191"} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:04.955614  149242 main.go:141] libmachine: (newest-cni-776287) DBG | skip adding static IP to network mk-newest-cni-776287 - found existing host DHCP lease matching {name: "newest-cni-776287", mac: "52:54:00:e3:cd:b1", ip: "192.168.61.191"}
	I0420 01:46:04.955636  149242 main.go:141] libmachine: (newest-cni-776287) Reserved static IP address: 192.168.61.191
	I0420 01:46:04.955652  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Getting to WaitForSSH function...
	I0420 01:46:04.955670  149242 main.go:141] libmachine: (newest-cni-776287) Waiting for SSH to be available...
	I0420 01:46:04.957798  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:04.958134  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:04.958169  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:04.958234  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Using SSH client type: external
	I0420 01:46:04.958265  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa (-rw-------)
	I0420 01:46:04.958308  149242 main.go:141] libmachine: (newest-cni-776287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:46:04.958321  149242 main.go:141] libmachine: (newest-cni-776287) DBG | About to run SSH command:
	I0420 01:46:04.958330  149242 main.go:141] libmachine: (newest-cni-776287) DBG | exit 0
	I0420 01:46:05.085729  149242 main.go:141] libmachine: (newest-cni-776287) DBG | SSH cmd err, output: <nil>: 
	I0420 01:46:05.086153  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetConfigRaw
	I0420 01:46:05.086827  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:46:05.089453  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.089787  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.089812  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.090062  149242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/config.json ...
	I0420 01:46:05.090256  149242 machine.go:94] provisionDockerMachine start ...
	I0420 01:46:05.090276  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:05.090494  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.092812  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.093129  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.093158  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.093269  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:05.093482  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.093669  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.093797  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:05.093975  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:05.094206  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:05.094221  149242 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:46:05.211572  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:46:05.211603  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:46:05.211866  149242 buildroot.go:166] provisioning hostname "newest-cni-776287"
	I0420 01:46:05.211900  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:46:05.212137  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.215162  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.215517  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.215560  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.215629  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:05.215853  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.216085  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.216281  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:05.216465  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:05.216638  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:05.216651  149242 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-776287 && echo "newest-cni-776287" | sudo tee /etc/hostname
	I0420 01:46:05.351876  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-776287
	
	I0420 01:46:05.351906  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.355253  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.355687  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.355710  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.355948  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:05.356151  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.356299  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.356445  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:05.356634  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:05.356882  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:05.356912  149242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-776287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-776287/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-776287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:46:05.485228  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:46:05.485268  149242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:46:05.485296  149242 buildroot.go:174] setting up certificates
	I0420 01:46:05.485328  149242 provision.go:84] configureAuth start
	I0420 01:46:05.485344  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:46:05.485668  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:46:05.488280  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.488695  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.488726  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.488861  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.491013  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.491321  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.491347  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.491499  149242 provision.go:143] copyHostCerts
	I0420 01:46:05.491554  149242 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:46:05.491564  149242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:46:05.491636  149242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:46:05.491749  149242 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:46:05.491759  149242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:46:05.491787  149242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:46:05.491854  149242 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:46:05.491862  149242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:46:05.491893  149242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:46:05.491951  149242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.newest-cni-776287 san=[127.0.0.1 192.168.61.191 localhost minikube newest-cni-776287]
	I0420 01:46:05.875486  149242 provision.go:177] copyRemoteCerts
	I0420 01:46:05.875548  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:46:05.875578  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.878259  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.878640  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.878671  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.878844  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:05.879040  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.879206  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:05.879311  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:05.964503  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:46:05.992896  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0420 01:46:06.019923  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:46:06.048801  149242 provision.go:87] duration metric: took 563.457283ms to configureAuth
	I0420 01:46:06.048837  149242 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:46:06.049061  149242 config.go:182] Loaded profile config "newest-cni-776287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:46:06.049162  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.051954  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.052309  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.052351  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.052550  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.052772  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.052947  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.053125  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.053294  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:06.053533  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:06.053557  149242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:46:06.357790  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:46:06.357822  149242 machine.go:97] duration metric: took 1.267552277s to provisionDockerMachine
	I0420 01:46:06.357834  149242 start.go:293] postStartSetup for "newest-cni-776287" (driver="kvm2")
	I0420 01:46:06.357845  149242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:46:06.357867  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.358265  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:46:06.358304  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.361147  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.361545  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.361574  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.361730  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.361926  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.362108  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.362280  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:06.449156  149242 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:46:06.454234  149242 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:46:06.454259  149242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:46:06.454345  149242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:46:06.454451  149242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:46:06.454573  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:46:06.465046  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:46:06.495003  149242 start.go:296] duration metric: took 137.154256ms for postStartSetup
	I0420 01:46:06.495041  149242 fix.go:56] duration metric: took 20.842527537s for fixHost
	I0420 01:46:06.495066  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.497825  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.498247  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.498274  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.498461  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.498673  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.498860  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.499063  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.499221  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:06.499406  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:06.499418  149242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:46:06.614980  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713577566.594337317
	
	I0420 01:46:06.615002  149242 fix.go:216] guest clock: 1713577566.594337317
	I0420 01:46:06.615010  149242 fix.go:229] Guest: 2024-04-20 01:46:06.594337317 +0000 UTC Remote: 2024-04-20 01:46:06.495044536 +0000 UTC m=+20.996820664 (delta=99.292781ms)
	I0420 01:46:06.615029  149242 fix.go:200] guest clock delta is within tolerance: 99.292781ms
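	The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only resync when the drift exceeds a tolerance. A small Go sketch of that comparison follows; the epoch-string parsing and the one-second tolerance are assumptions for illustration, not minikube's exact values.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the guest's `date +%s.%N` output
    // (e.g. "1713577566.594337317") into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1713577566.594337317")
        if err != nil {
            panic(err)
        }
        // Delta between host and guest clocks; only a large drift would
        // justify resyncing the guest. The 1s tolerance is an assumption.
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }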
	I0420 01:46:06.615033  149242 start.go:83] releasing machines lock for "newest-cni-776287", held for 20.962535545s
	I0420 01:46:06.615051  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.615325  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:46:06.618179  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.618550  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.618587  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.618707  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.619305  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.619480  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.619619  149242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:46:06.619698  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.619719  149242 ssh_runner.go:195] Run: cat /version.json
	I0420 01:46:06.619735  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.622513  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.622536  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.622957  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.622997  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.623024  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.623066  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.623177  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.623350  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.623403  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.623549  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.623553  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.623735  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.623733  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:06.623902  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:06.706725  149242 ssh_runner.go:195] Run: systemctl --version
	I0420 01:46:06.728359  149242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:46:06.870696  149242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:46:06.878547  149242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:46:06.878631  149242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:46:06.897121  149242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:46:06.897148  149242 start.go:494] detecting cgroup driver to use...
	I0420 01:46:06.897201  149242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:46:06.915872  149242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:46:06.932406  149242 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:46:06.932471  149242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:46:06.947748  149242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:46:06.966622  149242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:46:07.110766  149242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:46:07.278034  149242 docker.go:233] disabling docker service ...
	I0420 01:46:07.278104  149242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:46:07.296001  149242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:46:07.311682  149242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:46:07.474546  149242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:46:07.611807  149242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:46:07.628597  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:46:07.649520  149242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:46:07.649589  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.661780  149242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:46:07.661862  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.674043  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.686222  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.698285  149242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:46:07.710717  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.723422  149242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.741910  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.753719  149242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:46:07.764549  149242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:46:07.764626  149242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:46:07.780430  149242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
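	The three commands above are the usual pre-flight for a bridge-based CNI: probe the bridge-netfilter sysctl, load br_netfilter when it is missing, and enable IPv4 forwarding. A hedged Go sketch of the same check-then-fix sequence is below; the procfs paths are the standard ones, it needs root to run, and the modprobe fallback is an assumption rather than minikube's exact code path.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the log's sequence: if the
    // bridge-nf-call-iptables sysctl is not present, load br_netfilter,
    // then make sure IPv4 forwarding is switched on.
    func ensureBridgeNetfilter() error {
        const brSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(brSysctl); err != nil {
            // Module not loaded yet; modprobe creates the sysctl entry.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("bridge netfilter and ip_forward configured")
    }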
	I0420 01:46:07.791658  149242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:46:07.917090  149242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:46:08.069930  149242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:46:08.070015  149242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:46:08.076068  149242 start.go:562] Will wait 60s for crictl version
	I0420 01:46:08.076130  149242 ssh_runner.go:195] Run: which crictl
	I0420 01:46:08.080509  149242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:46:08.128220  149242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:46:08.128317  149242 ssh_runner.go:195] Run: crio --version
	I0420 01:46:08.161201  149242 ssh_runner.go:195] Run: crio --version
	I0420 01:46:08.195132  149242 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:46:08.196382  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:46:08.199186  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:08.199633  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:08.199656  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:08.199933  149242 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:46:08.205344  149242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:46:08.221288  149242 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0420 01:46:08.222637  149242 kubeadm.go:877] updating cluster {Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:46:08.222760  149242 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:46:08.222823  149242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:46:08.264215  149242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:46:08.264296  149242 ssh_runner.go:195] Run: which lz4
	I0420 01:46:08.269386  149242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:46:08.274606  149242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:46:08.274647  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:46:10.053160  149242 crio.go:462] duration metric: took 1.78380931s to copy over tarball
	I0420 01:46:10.053243  149242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:46:12.669769  149242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.616493997s)
	I0420 01:46:12.669814  149242 crio.go:469] duration metric: took 2.616625212s to extract the tarball
	I0420 01:46:12.669823  149242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:46:12.711292  149242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:46:12.768241  149242 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:46:12.768270  149242 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:46:12.768281  149242 kubeadm.go:928] updating node { 192.168.61.191 8443 v1.30.0 crio true true} ...
	I0420 01:46:12.768495  149242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-776287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:46:12.768570  149242 ssh_runner.go:195] Run: crio config
	I0420 01:46:12.825723  149242 cni.go:84] Creating CNI manager for ""
	I0420 01:46:12.825746  149242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:46:12.825761  149242 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0420 01:46:12.825785  149242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.191 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-776287 NodeName:newest-cni-776287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:46:12.825962  149242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-776287"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:46:12.826064  149242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:46:12.837359  149242 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:46:12.837432  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:46:12.847688  149242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0420 01:46:12.866105  149242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:46:12.884770  149242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0420 01:46:12.904337  149242 ssh_runner.go:195] Run: grep 192.168.61.191	control-plane.minikube.internal$ /etc/hosts
	I0420 01:46:12.908708  149242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:46:12.922093  149242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:46:13.063277  149242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:46:13.084324  149242 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287 for IP: 192.168.61.191
	I0420 01:46:13.084351  149242 certs.go:194] generating shared ca certs ...
	I0420 01:46:13.084403  149242 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:46:13.084627  149242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:46:13.084702  149242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:46:13.084719  149242 certs.go:256] generating profile certs ...
	I0420 01:46:13.084827  149242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/client.key
	I0420 01:46:13.084905  149242 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key.e52dbc46
	I0420 01:46:13.084958  149242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.key
	I0420 01:46:13.085071  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:46:13.085114  149242 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:46:13.085128  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:46:13.085158  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:46:13.085196  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:46:13.085236  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:46:13.085296  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:46:13.086292  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:46:13.131380  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:46:13.178504  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:46:13.222559  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:46:13.266862  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 01:46:13.304248  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 01:46:13.335900  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:46:13.368532  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:46:13.397073  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:46:13.425107  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:46:13.452826  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:46:13.479851  149242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:46:13.499455  149242 ssh_runner.go:195] Run: openssl version
	I0420 01:46:13.506096  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:46:13.518557  149242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:46:13.524133  149242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:46:13.524200  149242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:46:13.530388  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:46:13.542816  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:46:13.555635  149242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:46:13.561072  149242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:46:13.561143  149242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:46:13.567887  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:46:13.580371  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:46:13.592643  149242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:46:13.598085  149242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:46:13.598148  149242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:46:13.604865  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:46:13.616781  149242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:46:13.622290  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:46:13.629892  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:46:13.636564  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:46:13.643826  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:46:13.650398  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:46:13.657707  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
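	Note: each `openssl x509 -noout -in <cert> -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the existing control-plane certificates are judged reusable before the restart proceeds. A minimal stand-alone sketch of the same check in Go follows; the hard-coded certificate path is a placeholder for illustration, not code taken from minikube.

	// certcheck.go - minimal sketch of a "will this cert still be valid in 24h?" check,
	// equivalent in spirit to `openssl x509 -noout -in <cert> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // placeholder path
		if err != nil {
			fmt.Fprintln(os.Stderr, "read cert:", err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, "parse cert:", err)
			os.Exit(1)
		}
		// -checkend 86400: fail if the certificate expires within the next 86400 seconds.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least 24h")
	}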
	I0420 01:46:13.664262  149242 kubeadm.go:391] StartCluster: {Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:46:13.664346  149242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:46:13.664399  149242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:46:13.707757  149242 cri.go:89] found id: ""
	I0420 01:46:13.707849  149242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:46:13.718926  149242 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:46:13.718973  149242 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:46:13.718987  149242 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:46:13.719070  149242 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:46:13.731007  149242 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:46:13.737951  149242 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-776287" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:46:13.738591  149242 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-76456/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-776287" cluster setting kubeconfig missing "newest-cni-776287" context setting]
	I0420 01:46:13.739488  149242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:46:13.820710  149242 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:46:13.832524  149242 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.191
	I0420 01:46:13.832571  149242 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:46:13.832583  149242 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:46:13.832652  149242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:46:13.879863  149242 cri.go:89] found id: ""
	I0420 01:46:13.879973  149242 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:46:13.900869  149242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:46:13.914445  149242 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:46:13.914470  149242 kubeadm.go:156] found existing configuration files:
	
	I0420 01:46:13.914523  149242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:46:13.924942  149242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:46:13.924995  149242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:46:13.935703  149242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:46:13.947364  149242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:46:13.947429  149242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:46:13.957830  149242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:46:13.967527  149242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:46:13.967595  149242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:46:13.977980  149242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:46:13.987638  149242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:46:13.987683  149242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:46:13.997676  149242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:46:14.007968  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:14.139667  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:15.195874  149242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.056164067s)
	I0420 01:46:15.195909  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:15.414638  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:15.487388  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
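	Note: the five commands above replay the relevant `kubeadm init` phases against the generated /var/tmp/minikube/kubeadm.yaml: certificates, kubeconfig files, kubelet start, static control-plane manifests, and local etcd. A rough Go sketch of driving that phase sequence with os/exec follows, for illustration only; in the log each phase actually runs via sudo with PATH pointing at /var/lib/minikube/binaries/v1.30.0.

	// phases.go - sketch of replaying the kubeadm init phases in the order shown above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, args := range phases {
			// Each phase reads the same generated kubeadm config file.
			cmd := exec.Command("kubeadm", append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
				os.Exit(1)
			}
		}
	}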
	I0420 01:46:15.573436  149242 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:46:15.573512  149242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:46:16.074562  149242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:46:16.574049  149242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:46:16.630963  149242 api_server.go:72] duration metric: took 1.0575267s to wait for apiserver process to appear ...
	I0420 01:46:16.630998  149242 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:46:16.631019  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:16.631562  149242 api_server.go:269] stopped: https://192.168.61.191:8443/healthz: Get "https://192.168.61.191:8443/healthz": dial tcp 192.168.61.191:8443: connect: connection refused
	I0420 01:46:17.131088  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:19.567510  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:46:19.567548  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:46:19.567564  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:19.590373  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:46:19.590399  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:46:19.631651  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:19.647816  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:46:19.647856  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:46:20.131374  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:20.137384  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:46:20.137411  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:46:20.631082  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:20.640612  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:46:20.640640  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:46:21.131774  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:21.140058  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 200:
	ok
	I0420 01:46:21.168333  149242 api_server.go:141] control plane version: v1.30.0
	I0420 01:46:21.168361  149242 api_server.go:131] duration metric: took 4.537355807s to wait for apiserver health ...
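	Note: the wait loop above keeps probing https://192.168.61.191:8443/healthz, treating the early 403 (anonymous access before RBAC bootstrap completes) and 500 (post-start hooks still failing) responses as "not ready yet" until the endpoint finally returns 200 "ok". A self-contained sketch of that polling pattern is shown below; the timeout is a placeholder and the endpoint is taken from the log, so this is illustrative rather than minikube's own implementation.

	// healthzwait.go - minimal sketch of polling an apiserver /healthz endpoint until it
	// reports 200 OK, in the spirit of the wait loop in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed cert during bootstrap, so this illustrative
			// probe skips verification; a real client should trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute) // placeholder timeout
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.191:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for apiserver health")
		os.Exit(1)
	}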
	I0420 01:46:21.168371  149242 cni.go:84] Creating CNI manager for ""
	I0420 01:46:21.168377  149242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:46:21.170195  149242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:46:21.171575  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:46:21.195417  149242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:46:21.255917  149242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:46:21.274060  149242 system_pods.go:59] 8 kube-system pods found
	I0420 01:46:21.274109  149242 system_pods.go:61] "coredns-7db6d8ff4d-s79q5" [1e743f7e-a708-49e6-97fc-772bfb86bd1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:46:21.274125  149242 system_pods.go:61] "etcd-newest-cni-776287" [da504341-0d60-43a1-aa84-2c6f9f8ad005] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:46:21.274135  149242 system_pods.go:61] "kube-apiserver-newest-cni-776287" [723f9cc0-666c-43d9-abc9-d32948a2847b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:46:21.274151  149242 system_pods.go:61] "kube-controller-manager-newest-cni-776287" [9edc1ff3-4f5c-4c86-93bf-ead0cf5d81c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:46:21.274161  149242 system_pods.go:61] "kube-proxy-bdmnr" [8ab3ad83-4e89-4871-bae6-eadf6611e259] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0420 01:46:21.274171  149242 system_pods.go:61] "kube-scheduler-newest-cni-776287" [cbe24b8c-a717-4d02-85c5-2b4c4c66914b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:46:21.274181  149242 system_pods.go:61] "metrics-server-569cc877fc-m42jf" [9840799c-5af6-4143-8531-65fc3bf48118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:46:21.274191  149242 system_pods.go:61] "storage-provisioner" [d3bd0842-64d3-4df8-b59e-270ca31e20ac] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:46:21.274203  149242 system_pods.go:74] duration metric: took 18.258555ms to wait for pod list to return data ...
	I0420 01:46:21.274216  149242 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:46:21.280419  149242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:46:21.280453  149242 node_conditions.go:123] node cpu capacity is 2
	I0420 01:46:21.280465  149242 node_conditions.go:105] duration metric: took 6.240264ms to run NodePressure ...
	I0420 01:46:21.280490  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:21.716213  149242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:46:21.733026  149242 ops.go:34] apiserver oom_adj: -16
	I0420 01:46:21.733052  149242 kubeadm.go:591] duration metric: took 8.014057015s to restartPrimaryControlPlane
	I0420 01:46:21.733063  149242 kubeadm.go:393] duration metric: took 8.068806353s to StartCluster
	I0420 01:46:21.733084  149242 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:46:21.733172  149242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:46:21.735155  149242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:46:21.735485  149242 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:46:21.737241  149242 out.go:177] * Verifying Kubernetes components...
	I0420 01:46:21.735565  149242 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:46:21.735766  149242 config.go:182] Loaded profile config "newest-cni-776287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:46:21.738588  149242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:46:21.738616  149242 addons.go:69] Setting default-storageclass=true in profile "newest-cni-776287"
	I0420 01:46:21.738637  149242 addons.go:69] Setting metrics-server=true in profile "newest-cni-776287"
	I0420 01:46:21.738662  149242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-776287"
	I0420 01:46:21.738664  149242 addons.go:69] Setting dashboard=true in profile "newest-cni-776287"
	I0420 01:46:21.738622  149242 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-776287"
	I0420 01:46:21.738689  149242 addons.go:234] Setting addon dashboard=true in "newest-cni-776287"
	I0420 01:46:21.738689  149242 addons.go:234] Setting addon metrics-server=true in "newest-cni-776287"
	W0420 01:46:21.738697  149242 addons.go:243] addon dashboard should already be in state true
	W0420 01:46:21.738701  149242 addons.go:243] addon metrics-server should already be in state true
	I0420 01:46:21.738716  149242 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-776287"
	I0420 01:46:21.738726  149242 host.go:66] Checking if "newest-cni-776287" exists ...
	W0420 01:46:21.738730  149242 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:46:21.738740  149242 host.go:66] Checking if "newest-cni-776287" exists ...
	I0420 01:46:21.738768  149242 host.go:66] Checking if "newest-cni-776287" exists ...
	I0420 01:46:21.738998  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.739047  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.739130  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.739155  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.739165  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.739172  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.739177  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.739199  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.756500  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43197
	I0420 01:46:21.756504  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35407
	I0420 01:46:21.757346  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.757366  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.757351  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0420 01:46:21.757949  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.757950  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.757986  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I0420 01:46:21.757970  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.758001  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.757992  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.758405  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.758471  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.758606  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.758618  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.759073  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.759101  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.759113  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.759147  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.759319  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.759398  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.759839  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.759864  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.760132  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.760180  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.760357  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.760711  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.764481  149242 addons.go:234] Setting addon default-storageclass=true in "newest-cni-776287"
	W0420 01:46:21.764503  149242 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:46:21.764538  149242 host.go:66] Checking if "newest-cni-776287" exists ...
	I0420 01:46:21.764893  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.764930  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.779043  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I0420 01:46:21.779386  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0420 01:46:21.779527  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.780138  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.780163  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.780187  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.780667  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.780690  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.780705  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.781040  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.781088  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.781257  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0420 01:46:21.781425  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42979
	I0420 01:46:21.781438  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.781835  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.781923  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.782511  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.782537  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.783197  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.783236  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:21.783280  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.783318  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.785443  149242 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0420 01:46:21.783771  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.783851  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.785570  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.784344  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:21.786063  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.789207  149242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:46:21.787620  149242 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0420 01:46:21.789813  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:21.790628  149242 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:46:21.790644  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:46:21.790659  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:21.792211  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0420 01:46:21.792226  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0420 01:46:21.792241  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:21.794092  149242 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:46:21.795793  149242 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:46:21.795812  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:46:21.795828  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:21.794064  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.795899  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:21.795921  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.795034  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:21.795542  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.796176  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:21.796263  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:21.796288  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.796300  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:21.796490  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:21.796534  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:21.796797  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:21.797099  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:21.797275  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:21.798665  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.799026  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:21.799058  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.799185  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:21.799336  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:21.799488  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:21.799629  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:21.803998  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I0420 01:46:21.804419  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.804887  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.804900  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.805265  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.805455  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.806787  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:21.807075  149242 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:46:21.807112  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:46:21.807130  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:21.809421  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.809654  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:21.809683  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.809840  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:21.810030  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:21.810153  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:21.810302  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:21.986819  149242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:46:22.009563  149242 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:46:22.009651  149242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:46:22.026657  149242 api_server.go:72] duration metric: took 291.126793ms to wait for apiserver process to appear ...
	I0420 01:46:22.026687  149242 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:46:22.026708  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:22.033480  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 200:
	ok
	I0420 01:46:22.034801  149242 api_server.go:141] control plane version: v1.30.0
	I0420 01:46:22.034821  149242 api_server.go:131] duration metric: took 8.126943ms to wait for apiserver health ...
	I0420 01:46:22.034829  149242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:46:22.045680  149242 system_pods.go:59] 8 kube-system pods found
	I0420 01:46:22.045713  149242 system_pods.go:61] "coredns-7db6d8ff4d-s79q5" [1e743f7e-a708-49e6-97fc-772bfb86bd1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:46:22.045723  149242 system_pods.go:61] "etcd-newest-cni-776287" [da504341-0d60-43a1-aa84-2c6f9f8ad005] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:46:22.045735  149242 system_pods.go:61] "kube-apiserver-newest-cni-776287" [723f9cc0-666c-43d9-abc9-d32948a2847b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:46:22.045751  149242 system_pods.go:61] "kube-controller-manager-newest-cni-776287" [9edc1ff3-4f5c-4c86-93bf-ead0cf5d81c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:46:22.045758  149242 system_pods.go:61] "kube-proxy-bdmnr" [8ab3ad83-4e89-4871-bae6-eadf6611e259] Running
	I0420 01:46:22.045770  149242 system_pods.go:61] "kube-scheduler-newest-cni-776287" [cbe24b8c-a717-4d02-85c5-2b4c4c66914b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:46:22.045777  149242 system_pods.go:61] "metrics-server-569cc877fc-m42jf" [9840799c-5af6-4143-8531-65fc3bf48118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:46:22.045781  149242 system_pods.go:61] "storage-provisioner" [d3bd0842-64d3-4df8-b59e-270ca31e20ac] Running
	I0420 01:46:22.045787  149242 system_pods.go:74] duration metric: took 10.952797ms to wait for pod list to return data ...
	I0420 01:46:22.045796  149242 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:46:22.048136  149242 default_sa.go:45] found service account: "default"
	I0420 01:46:22.048158  149242 default_sa.go:55] duration metric: took 2.353757ms for default service account to be created ...
	I0420 01:46:22.048172  149242 kubeadm.go:576] duration metric: took 312.646021ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0420 01:46:22.048198  149242 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:46:22.050374  149242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:46:22.050397  149242 node_conditions.go:123] node cpu capacity is 2
	I0420 01:46:22.050408  149242 node_conditions.go:105] duration metric: took 2.201022ms to run NodePressure ...
	I0420 01:46:22.050422  149242 start.go:240] waiting for startup goroutines ...
	I0420 01:46:22.066658  149242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:46:22.128303  149242 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:46:22.128325  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:46:22.171786  149242 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:46:22.171816  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:46:22.191227  149242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:46:22.212912  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0420 01:46:22.212940  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0420 01:46:22.246026  149242 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:46:22.246053  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:46:22.283090  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0420 01:46:22.283124  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0420 01:46:22.315938  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0420 01:46:22.315968  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0420 01:46:22.343205  149242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:46:22.381372  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0420 01:46:22.381400  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0420 01:46:22.423120  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0420 01:46:22.423157  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0420 01:46:22.444409  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:22.444444  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:22.444750  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:22.444776  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:22.444785  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:22.444794  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:22.445056  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:22.445075  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:22.445097  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Closing plugin on server side
	I0420 01:46:22.457606  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:22.457633  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:22.457917  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:22.457935  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:22.478412  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0420 01:46:22.478433  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0420 01:46:22.521933  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0420 01:46:22.521962  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0420 01:46:22.583580  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0420 01:46:22.583609  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0420 01:46:22.641064  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0420 01:46:22.641092  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0420 01:46:22.676894  149242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0420 01:46:23.610522  149242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.267267721s)
	I0420 01:46:23.610585  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.610606  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.610636  149242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.419363531s)
	I0420 01:46:23.610683  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.610700  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.612347  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.612355  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.612389  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.612365  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.612402  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.612435  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.612447  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.612460  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.612809  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.612817  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Closing plugin on server side
	I0420 01:46:23.612822  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.612829  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.612838  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.612842  149242 addons.go:470] Verifying addon metrics-server=true in "newest-cni-776287"
	I0420 01:46:23.612856  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Closing plugin on server side
	I0420 01:46:23.792528  149242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.11558119s)
	I0420 01:46:23.792594  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.792607  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.792957  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Closing plugin on server side
	I0420 01:46:23.793046  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.793068  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.793083  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.793091  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.793347  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.793362  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.795177  149242 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-776287 addons enable metrics-server
	
	I0420 01:46:23.796789  149242 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0420 01:46:23.798364  149242 addons.go:505] duration metric: took 2.062810996s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0420 01:46:23.798408  149242 start.go:245] waiting for cluster config update ...
	I0420 01:46:23.798425  149242 start.go:254] writing updated cluster config ...
	I0420 01:46:23.798745  149242 ssh_runner.go:195] Run: rm -f paused
	I0420 01:46:23.850819  149242 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:46:23.852376  149242 out.go:177] * Done! kubectl is now configured to use "newest-cni-776287" cluster and "default" namespace by default
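	(For reference: the addon state reported above can usually be spot-checked from the host. This is a minimal sketch, assuming the "newest-cni-776287" kubeconfig context that minikube just wrote and the default kube-system namespace used by the metrics-server addon.)
	
		minikube -p newest-cni-776287 addons list
		kubectl --context newest-cni-776287 -n kube-system get deploy metrics-server
	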
	
	
	==> CRI-O <==
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.650217230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577603650196091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4aab6836-0e3a-4b9c-8fdd-9b562b2a5a2c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.651070772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2e53aaa-655f-44bc-9dec-26f19103c38a name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.651115420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2e53aaa-655f-44bc-9dec-26f19103c38a name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.652058005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:202ece012609f9d48bbeb0e85472fbe2a0b2b772ec62432f396f557d5dd946ef,PodSandboxId:d9514b8bf59030a3f2b9706716cb9a3a1e48b9b068137809131bb1ada06fc8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576645381855211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739478ce-5d74-4be0-8a39-d80245d8aa8a,},Annotations:map[string]string{io.kubernetes.container.hash: c4733f46,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abbd634e052b9e52c32611fc47408d8fe7ee896d9ecd04d9cbf3aef12eccf57,PodSandboxId:8cf044dbd658fe3cc4049c9de56d760626ffee3cc09f9f48f6638101308a297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644812715862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p8dhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf589b6-f54b-4615-b95e-b95c89766e24,},Annotations:map[string]string{io.kubernetes.container.hash: 3ae49b9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8de8e15844cc717e83e053d00d6750706a13a63ee6c20708fc464bfc0c40a13,PodSandboxId:cf148ff5f383eb44911d9e17767aeb975ebd0641d07cce69aaae193b007f6dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644686616341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d07ba546-0251-4862-ad1b-0c3d5ee7b1f3,},Annotations:map[string]string{io.kubernetes.container.hash: 75743cdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1029b8d280d260e9ea03ba7dc34c5eb98fb8165005ba1384a6203dd82f91778,PodSandboxId:e1ff724454b3afe3852debf0a832c594c960a5a1f646ccaf1c32067d45c4f730,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713576643920587325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jt8wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ddf3ce-29f8-437d-bd31-89411c135012,},Annotations:map[string]string{io.kubernetes.container.hash: e3f6992f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3493c7f700417b4ac3a7012509af242191227e54e49a554081b5241815cd3348,PodSandboxId:848d25ed966118e3bd883b5a1bf8da4f8d784a733617553fec751ab79c816a85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576623892130394,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0145c53c6d1d18df04cacd509389f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ec0c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61125bc317fa0d6fca2fcda5eb460565c80f64678312eed04da29b68611a9d7c,PodSandboxId:75b26293d3b581136cd8bab406ee66a9d9cff38b5938dc653dda92234053a82e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576623896231637,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379168c771e6417e18a246073d15f9b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd475dac8ad321c108d5e9229490f0dd02ddb9bd9b62f9eb94bfeefb2601b63,PodSandboxId:903e1234c68a3e116e214e544c3601b2d60ac9994a398f5434a019a336e9a1fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576623814718254,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772,PodSandboxId:451d2b570db13613edc36b85d7b695c2595739d7dc86a707da669b193982796f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576623784375326,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fcee6681b164edc8892802779b78785,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1e6d10e5f7a185afc4d60b1f18df284304bf292915c0fb179a33d7c7488a0a,PodSandboxId:e27cf517fa17d426db075bcc8002d45661d4fe411604f02c466eb6aae5d01fbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713576333481660906,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2e53aaa-655f-44bc-9dec-26f19103c38a name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.692411126Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2bcaebf-cc58-41c1-9b25-bbf4f6b5ff90 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.692517265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2bcaebf-cc58-41c1-9b25-bbf4f6b5ff90 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.695324982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f76e26e4-989f-497b-af0a-72fb39892efe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.695792144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577603695769308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f76e26e4-989f-497b-af0a-72fb39892efe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.696831411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72709812-7499-4108-a1a4-5aa7d2239212 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.696989119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72709812-7499-4108-a1a4-5aa7d2239212 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.697182806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:202ece012609f9d48bbeb0e85472fbe2a0b2b772ec62432f396f557d5dd946ef,PodSandboxId:d9514b8bf59030a3f2b9706716cb9a3a1e48b9b068137809131bb1ada06fc8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576645381855211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739478ce-5d74-4be0-8a39-d80245d8aa8a,},Annotations:map[string]string{io.kubernetes.container.hash: c4733f46,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abbd634e052b9e52c32611fc47408d8fe7ee896d9ecd04d9cbf3aef12eccf57,PodSandboxId:8cf044dbd658fe3cc4049c9de56d760626ffee3cc09f9f48f6638101308a297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644812715862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p8dhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf589b6-f54b-4615-b95e-b95c89766e24,},Annotations:map[string]string{io.kubernetes.container.hash: 3ae49b9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8de8e15844cc717e83e053d00d6750706a13a63ee6c20708fc464bfc0c40a13,PodSandboxId:cf148ff5f383eb44911d9e17767aeb975ebd0641d07cce69aaae193b007f6dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644686616341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d07ba546-0251-4862-ad1b-0c3d5ee7b1f3,},Annotations:map[string]string{io.kubernetes.container.hash: 75743cdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1029b8d280d260e9ea03ba7dc34c5eb98fb8165005ba1384a6203dd82f91778,PodSandboxId:e1ff724454b3afe3852debf0a832c594c960a5a1f646ccaf1c32067d45c4f730,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713576643920587325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jt8wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ddf3ce-29f8-437d-bd31-89411c135012,},Annotations:map[string]string{io.kubernetes.container.hash: e3f6992f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3493c7f700417b4ac3a7012509af242191227e54e49a554081b5241815cd3348,PodSandboxId:848d25ed966118e3bd883b5a1bf8da4f8d784a733617553fec751ab79c816a85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576623892130394,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0145c53c6d1d18df04cacd509389f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ec0c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61125bc317fa0d6fca2fcda5eb460565c80f64678312eed04da29b68611a9d7c,PodSandboxId:75b26293d3b581136cd8bab406ee66a9d9cff38b5938dc653dda92234053a82e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576623896231637,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379168c771e6417e18a246073d15f9b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd475dac8ad321c108d5e9229490f0dd02ddb9bd9b62f9eb94bfeefb2601b63,PodSandboxId:903e1234c68a3e116e214e544c3601b2d60ac9994a398f5434a019a336e9a1fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576623814718254,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772,PodSandboxId:451d2b570db13613edc36b85d7b695c2595739d7dc86a707da669b193982796f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576623784375326,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fcee6681b164edc8892802779b78785,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1e6d10e5f7a185afc4d60b1f18df284304bf292915c0fb179a33d7c7488a0a,PodSandboxId:e27cf517fa17d426db075bcc8002d45661d4fe411604f02c466eb6aae5d01fbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713576333481660906,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72709812-7499-4108-a1a4-5aa7d2239212 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.736365306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5b40928-ba95-4433-942f-bba87d0f97b3 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.736458703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5b40928-ba95-4433-942f-bba87d0f97b3 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.738233948Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=929aa546-749b-4b61-8b0f-86e0051fb7ab name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.738767154Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577603738616444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=929aa546-749b-4b61-8b0f-86e0051fb7ab name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.739288629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=731d6adf-06ad-4aa1-bc15-4abb18807aca name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.739367276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=731d6adf-06ad-4aa1-bc15-4abb18807aca name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.739547829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:202ece012609f9d48bbeb0e85472fbe2a0b2b772ec62432f396f557d5dd946ef,PodSandboxId:d9514b8bf59030a3f2b9706716cb9a3a1e48b9b068137809131bb1ada06fc8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576645381855211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739478ce-5d74-4be0-8a39-d80245d8aa8a,},Annotations:map[string]string{io.kubernetes.container.hash: c4733f46,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abbd634e052b9e52c32611fc47408d8fe7ee896d9ecd04d9cbf3aef12eccf57,PodSandboxId:8cf044dbd658fe3cc4049c9de56d760626ffee3cc09f9f48f6638101308a297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644812715862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p8dhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf589b6-f54b-4615-b95e-b95c89766e24,},Annotations:map[string]string{io.kubernetes.container.hash: 3ae49b9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8de8e15844cc717e83e053d00d6750706a13a63ee6c20708fc464bfc0c40a13,PodSandboxId:cf148ff5f383eb44911d9e17767aeb975ebd0641d07cce69aaae193b007f6dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644686616341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d07ba546-0251-4862-ad1b-0c3d5ee7b1f3,},Annotations:map[string]string{io.kubernetes.container.hash: 75743cdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1029b8d280d260e9ea03ba7dc34c5eb98fb8165005ba1384a6203dd82f91778,PodSandboxId:e1ff724454b3afe3852debf0a832c594c960a5a1f646ccaf1c32067d45c4f730,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713576643920587325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jt8wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ddf3ce-29f8-437d-bd31-89411c135012,},Annotations:map[string]string{io.kubernetes.container.hash: e3f6992f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3493c7f700417b4ac3a7012509af242191227e54e49a554081b5241815cd3348,PodSandboxId:848d25ed966118e3bd883b5a1bf8da4f8d784a733617553fec751ab79c816a85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576623892130394,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0145c53c6d1d18df04cacd509389f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ec0c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61125bc317fa0d6fca2fcda5eb460565c80f64678312eed04da29b68611a9d7c,PodSandboxId:75b26293d3b581136cd8bab406ee66a9d9cff38b5938dc653dda92234053a82e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576623896231637,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379168c771e6417e18a246073d15f9b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd475dac8ad321c108d5e9229490f0dd02ddb9bd9b62f9eb94bfeefb2601b63,PodSandboxId:903e1234c68a3e116e214e544c3601b2d60ac9994a398f5434a019a336e9a1fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576623814718254,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772,PodSandboxId:451d2b570db13613edc36b85d7b695c2595739d7dc86a707da669b193982796f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576623784375326,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fcee6681b164edc8892802779b78785,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1e6d10e5f7a185afc4d60b1f18df284304bf292915c0fb179a33d7c7488a0a,PodSandboxId:e27cf517fa17d426db075bcc8002d45661d4fe411604f02c466eb6aae5d01fbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713576333481660906,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=731d6adf-06ad-4aa1-bc15-4abb18807aca name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.774073699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08ce848c-a423-40a3-9b21-e5e6c1fdb0bf name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.774161652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08ce848c-a423-40a3-9b21-e5e6c1fdb0bf name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.775138342Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de2bd360-63a0-4d13-886d-cbac1a405047 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.775501519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577603775480966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de2bd360-63a0-4d13-886d-cbac1a405047 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.776166462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68ecbd3d-6884-440b-9424-a9a3e68f734b name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.776244528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68ecbd3d-6884-440b-9424-a9a3e68f734b name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:43 default-k8s-diff-port-907988 crio[724]: time="2024-04-20 01:46:43.776426887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:202ece012609f9d48bbeb0e85472fbe2a0b2b772ec62432f396f557d5dd946ef,PodSandboxId:d9514b8bf59030a3f2b9706716cb9a3a1e48b9b068137809131bb1ada06fc8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576645381855211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739478ce-5d74-4be0-8a39-d80245d8aa8a,},Annotations:map[string]string{io.kubernetes.container.hash: c4733f46,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abbd634e052b9e52c32611fc47408d8fe7ee896d9ecd04d9cbf3aef12eccf57,PodSandboxId:8cf044dbd658fe3cc4049c9de56d760626ffee3cc09f9f48f6638101308a297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644812715862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p8dhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bf589b6-f54b-4615-b95e-b95c89766e24,},Annotations:map[string]string{io.kubernetes.container.hash: 3ae49b9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8de8e15844cc717e83e053d00d6750706a13a63ee6c20708fc464bfc0c40a13,PodSandboxId:cf148ff5f383eb44911d9e17767aeb975ebd0641d07cce69aaae193b007f6dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576644686616341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d07ba546-0251-4862-ad1b-0c3d5ee7b1f3,},Annotations:map[string]string{io.kubernetes.container.hash: 75743cdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1029b8d280d260e9ea03ba7dc34c5eb98fb8165005ba1384a6203dd82f91778,PodSandboxId:e1ff724454b3afe3852debf0a832c594c960a5a1f646ccaf1c32067d45c4f730,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713576643920587325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jt8wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ddf3ce-29f8-437d-bd31-89411c135012,},Annotations:map[string]string{io.kubernetes.container.hash: e3f6992f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3493c7f700417b4ac3a7012509af242191227e54e49a554081b5241815cd3348,PodSandboxId:848d25ed966118e3bd883b5a1bf8da4f8d784a733617553fec751ab79c816a85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576623892130394,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0145c53c6d1d18df04cacd509389f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ec0c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61125bc317fa0d6fca2fcda5eb460565c80f64678312eed04da29b68611a9d7c,PodSandboxId:75b26293d3b581136cd8bab406ee66a9d9cff38b5938dc653dda92234053a82e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576623896231637,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379168c771e6417e18a246073d15f9b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd475dac8ad321c108d5e9229490f0dd02ddb9bd9b62f9eb94bfeefb2601b63,PodSandboxId:903e1234c68a3e116e214e544c3601b2d60ac9994a398f5434a019a336e9a1fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576623814718254,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772,PodSandboxId:451d2b570db13613edc36b85d7b695c2595739d7dc86a707da669b193982796f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576623784375326,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fcee6681b164edc8892802779b78785,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1e6d10e5f7a185afc4d60b1f18df284304bf292915c0fb179a33d7c7488a0a,PodSandboxId:e27cf517fa17d426db075bcc8002d45661d4fe411604f02c466eb6aae5d01fbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713576333481660906,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-907988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961738bc673ea9c61e235980dd98ebef,},Annotations:map[string]string{io.kubernetes.container.hash: 3d86dfc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68ecbd3d-6884-440b-9424-a9a3e68f734b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	202ece012609f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   d9514b8bf5903       storage-provisioner
	9abbd634e052b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   8cf044dbd658f       coredns-7db6d8ff4d-p8dhp
	c8de8e15844cc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   cf148ff5f383e       coredns-7db6d8ff4d-g2nzn
	b1029b8d280d2       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   15 minutes ago      Running             kube-proxy                0                   e1ff724454b3a       kube-proxy-jt8wr
	61125bc317fa0       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   16 minutes ago      Running             kube-scheduler            2                   75b26293d3b58       kube-scheduler-default-k8s-diff-port-907988
	3493c7f700417       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   848d25ed96611       etcd-default-k8s-diff-port-907988
	2dd475dac8ad3       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   16 minutes ago      Running             kube-apiserver            2                   903e1234c68a3       kube-apiserver-default-k8s-diff-port-907988
	78ee53c8b7912       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   16 minutes ago      Running             kube-controller-manager   2                   451d2b570db13       kube-controller-manager-default-k8s-diff-port-907988
	eb1e6d10e5f7a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   21 minutes ago      Exited              kube-apiserver            1                   e27cf517fa17d       kube-apiserver-default-k8s-diff-port-907988
	
	
	==> coredns [9abbd634e052b9e52c32611fc47408d8fe7ee896d9ecd04d9cbf3aef12eccf57] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c8de8e15844cc717e83e053d00d6750706a13a63ee6c20708fc464bfc0c40a13] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-907988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-907988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=default-k8s-diff-port-907988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T01_30_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:30:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-907988
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:46:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:46:07 +0000   Sat, 20 Apr 2024 01:30:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:46:07 +0000   Sat, 20 Apr 2024 01:30:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:46:07 +0000   Sat, 20 Apr 2024 01:30:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:46:07 +0000   Sat, 20 Apr 2024 01:30:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    default-k8s-diff-port-907988
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 95948a85dd4149018df21ee92061e8a2
	  System UUID:                95948a85-dd41-4901-8df2-1ee92061e8a2
	  Boot ID:                    bde3fa4b-3c5f-4fd7-ae40-27bd8d3743bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-g2nzn                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-p8dhp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-907988                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-907988             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-907988    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-jt8wr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-907988             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-6rgpj                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-907988 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-907988 event: Registered Node default-k8s-diff-port-907988 in Controller
	
	
	==> dmesg <==
	[  +0.042816] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.608699] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.251727] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.691158] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.279555] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.060309] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073536] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.192400] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.139925] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.309630] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +5.094941] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.063921] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.788924] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.575309] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.371245] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.969603] kauditd_printk_skb: 27 callbacks suppressed
	[Apr20 01:30] systemd-fstab-generator[3603]: Ignoring "noauto" option for root device
	[  +0.066581] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.998341] systemd-fstab-generator[3928]: Ignoring "noauto" option for root device
	[  +0.082095] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.906241] systemd-fstab-generator[4126]: Ignoring "noauto" option for root device
	[  +0.107883] kauditd_printk_skb: 12 callbacks suppressed
	[Apr20 01:31] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [3493c7f700417b4ac3a7012509af242191227e54e49a554081b5241815cd3348] <==
	{"level":"info","ts":"2024-04-20T01:30:24.926022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-20T01:30:24.926121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-20T01:30:24.926172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgPreVoteResp from d8a7e113a49009a2 at term 1"}
	{"level":"info","ts":"2024-04-20T01:30:24.926203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became candidate at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:24.926228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgVoteResp from d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:24.926255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became leader at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:24.926281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8a7e113a49009a2 elected leader d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2024-04-20T01:30:24.930169Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d8a7e113a49009a2","local-member-attributes":"{Name:default-k8s-diff-port-907988 ClientURLs:[https://192.168.39.222:2379]}","request-path":"/0/members/d8a7e113a49009a2/attributes","cluster-id":"26257d506d5fabfb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:30:24.931879Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:24.932112Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:30:24.932954Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:30:24.932994Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:30:24.933032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:30:24.944133Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:24.944236Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:24.944274Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:24.945684Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.222:2379"}
	{"level":"info","ts":"2024-04-20T01:30:24.94928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T01:40:25.007497Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":712}
	{"level":"info","ts":"2024-04-20T01:40:25.018169Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":712,"took":"10.055391ms","hash":2408490248,"current-db-size-bytes":2244608,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2244608,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-20T01:40:25.018271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2408490248,"revision":712,"compact-revision":-1}
	{"level":"info","ts":"2024-04-20T01:45:04.655196Z","caller":"traceutil/trace.go:171","msg":"trace[1629655729] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"130.72411ms","start":"2024-04-20T01:45:04.524423Z","end":"2024-04-20T01:45:04.655147Z","steps":["trace[1629655729] 'process raft request'  (duration: 130.432943ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T01:45:25.015571Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":956}
	{"level":"info","ts":"2024-04-20T01:45:25.020327Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":956,"took":"4.264205ms","hash":1935888620,"current-db-size-bytes":2244608,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-20T01:45:25.020385Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1935888620,"revision":956,"compact-revision":712}
	
	
	==> kernel <==
	 01:46:44 up 21 min,  0 users,  load average: 0.53, 0.37, 0.29
	Linux default-k8s-diff-port-907988 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2dd475dac8ad321c108d5e9229490f0dd02ddb9bd9b62f9eb94bfeefb2601b63] <==
	I0420 01:41:27.974163       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:43:27.974073       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:43:27.974168       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:43:27.974178       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:43:27.977844       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:43:27.977995       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:43:27.978005       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:45:26.979562       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:45:26.979875       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0420 01:45:27.980227       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:45:27.980362       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:45:27.980403       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:45:27.980478       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:45:27.980506       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:45:27.982402       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:46:27.981104       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:46:27.981416       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:46:27.981457       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:46:27.982601       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:46:27.982653       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:46:27.982662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [eb1e6d10e5f7a185afc4d60b1f18df284304bf292915c0fb179a33d7c7488a0a] <==
	W0420 01:30:19.402539       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.468479       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.483315       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.530840       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.551630       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.630519       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.651412       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.684536       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.709222       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.774495       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.787015       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.823519       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.964687       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:19.980178       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.146537       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.174306       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.178287       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.235674       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.260543       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.449749       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.493185       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.526561       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.622271       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.625230       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0420 01:30:20.672846       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [78ee53c8b79120f528279f8634a895830faba01476e27cd5b3d11b4941668772] <==
	I0420 01:41:13.036034       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:41:42.547846       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:41:43.049008       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0420 01:42:07.556272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="360.692µs"
	E0420 01:42:12.554611       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:42:13.060028       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0420 01:42:20.551454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="78.69µs"
	E0420 01:42:42.560741       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:42:43.070469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:43:12.566740       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:43:13.079601       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:43:42.577368       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:43:43.089099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:44:12.583545       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:44:13.098855       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:44:42.590887       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:44:43.108291       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:45:12.598229       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:45:13.118228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:45:42.604177       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:45:43.128161       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:46:12.610894       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:46:13.140567       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:46:42.615901       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:46:43.149279       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b1029b8d280d260e9ea03ba7dc34c5eb98fb8165005ba1384a6203dd82f91778] <==
	I0420 01:30:44.313172       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:30:44.329303       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.222"]
	I0420 01:30:44.445267       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:30:44.445304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:30:44.445317       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:30:44.477802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:30:44.478010       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:30:44.478025       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:30:44.533362       1 config.go:192] "Starting service config controller"
	I0420 01:30:44.533405       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:30:44.533440       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:30:44.533445       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:30:44.534148       1 config.go:319] "Starting node config controller"
	I0420 01:30:44.534156       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:30:44.635546       1 shared_informer.go:320] Caches are synced for node config
	I0420 01:30:44.635570       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:30:44.635594       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [61125bc317fa0d6fca2fcda5eb460565c80f64678312eed04da29b68611a9d7c] <==
	W0420 01:30:27.027816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:27.028431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:27.027622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 01:30:27.028496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 01:30:27.843272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 01:30:27.843466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 01:30:27.843306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:27.843547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:27.880197       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 01:30:27.880282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 01:30:27.961732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 01:30:27.961848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0420 01:30:27.963493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 01:30:27.963556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 01:30:28.148353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 01:30:28.148896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 01:30:28.155485       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:30:28.155688       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:30:28.199829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:28.199999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:28.299553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:28.299749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:28.310718       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:30:28.310865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0420 01:30:30.809633       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 01:44:29 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:44:29.558442    3935 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:44:29 default-k8s-diff-port-907988 kubelet[3935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:44:29 default-k8s-diff-port-907988 kubelet[3935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:44:29 default-k8s-diff-port-907988 kubelet[3935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:44:29 default-k8s-diff-port-907988 kubelet[3935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:44:40 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:44:40.535801    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:44:53 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:44:53.536802    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:45:06 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:45:06.536430    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:45:17 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:45:17.534819    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:45:29 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:45:29.559343    3935 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:45:29 default-k8s-diff-port-907988 kubelet[3935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:45:29 default-k8s-diff-port-907988 kubelet[3935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:45:29 default-k8s-diff-port-907988 kubelet[3935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:45:29 default-k8s-diff-port-907988 kubelet[3935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:45:30 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:45:30.534682    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:45:43 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:45:43.536085    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:45:58 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:45:58.536209    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:46:09 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:46:09.537632    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:46:21 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:46:21.537263    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	Apr 20 01:46:29 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:46:29.558449    3935 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:46:29 default-k8s-diff-port-907988 kubelet[3935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:46:29 default-k8s-diff-port-907988 kubelet[3935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:46:29 default-k8s-diff-port-907988 kubelet[3935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:46:29 default-k8s-diff-port-907988 kubelet[3935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:46:35 default-k8s-diff-port-907988 kubelet[3935]: E0420 01:46:35.534629    3935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6rgpj" podUID="70cba472-11c4-4604-a4ad-3575ccedf005"
	
	
	==> storage-provisioner [202ece012609f9d48bbeb0e85472fbe2a0b2b772ec62432f396f557d5dd946ef] <==
	I0420 01:30:45.549330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 01:30:45.561699       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 01:30:45.562894       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 01:30:45.577575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 01:30:45.578027       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-907988_b7fd2827-c4c4-4970-bf24-b6ff22e80e25!
	I0420 01:30:45.578508       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78244ffc-cc6f-4be5-807c-3078b98a5438", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-907988_b7fd2827-c4c4-4970-bf24-b6ff22e80e25 became leader
	I0420 01:30:45.678705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-907988_b7fd2827-c4c4-4970-bf24-b6ff22e80e25!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-907988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-6rgpj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-907988 describe pod metrics-server-569cc877fc-6rgpj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-907988 describe pod metrics-server-569cc877fc-6rgpj: exit status 1 (60.11671ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-6rgpj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-907988 describe pod metrics-server-569cc877fc-6rgpj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (412.66s)
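
The repeated metrics-server ImagePullBackOff entries in the kubelet log above are consistent with the addon override recorded in the Audit log (metrics-server enabled with --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain), which deliberately points the deployment at an unpullable registry. A minimal way to confirm the rewritten image by hand is sketched below; this is illustrative only, not part of the recorded run, and it assumes the default-k8s-diff-port-907988 kubectl context is still reachable and that the addon deployment is named metrics-server in kube-system:

	# Hypothetical manual check: print the image the metrics-server deployment is configured to pull.
	kubectl --context default-k8s-diff-port-907988 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Given the override above, this should print fake.domain/registry.k8s.io/echoserver:1.4,
	# matching the image named in the Back-off events in the kubelet log.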

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (367.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-269507 -n embed-certs-269507
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-20 01:46:24.406036718 +0000 UTC m=+6557.920904107
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-269507 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-269507 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.885µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-269507 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-269507 -n embed-certs-269507
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-269507 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-269507 logs -n 25: (1.42606668s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-338118             | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC | 20 Apr 24 01:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-907988  | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-269507            | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-564860        | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-338118                  | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-907988       | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:30 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-269507                 | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-564860             | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:44 UTC | 20 Apr 24 01:44 UTC |
	| start   | -p newest-cni-776287 --memory=2200 --alsologtostderr   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:44 UTC | 20 Apr 24 01:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:45 UTC |
	| addons  | enable metrics-server -p newest-cni-776287             | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-776287                                   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-776287                  | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-776287 --memory=2200 --alsologtostderr   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:45 UTC | 20 Apr 24 01:46 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-776287 image list                           | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:46 UTC | 20 Apr 24 01:46 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-776287                                   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:46 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:45:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:45:45.545747  149242 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:45:45.545845  149242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:45:45.545853  149242 out.go:304] Setting ErrFile to fd 2...
	I0420 01:45:45.545857  149242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:45:45.546037  149242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:45:45.546584  149242 out.go:298] Setting JSON to false
	I0420 01:45:45.547521  149242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16093,"bootTime":1713561453,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:45:45.547581  149242 start.go:139] virtualization: kvm guest
	I0420 01:45:45.549970  149242 out.go:177] * [newest-cni-776287] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:45:45.551389  149242 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:45:45.551383  149242 notify.go:220] Checking for updates...
	I0420 01:45:45.552706  149242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:45:45.553997  149242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:45:45.555166  149242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:45:45.556326  149242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:45:45.557519  149242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:45:45.559314  149242 config.go:182] Loaded profile config "newest-cni-776287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:45:45.559976  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:45:45.560043  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:45:45.575366  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36161
	I0420 01:45:45.575794  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:45:45.576293  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:45:45.576313  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:45:45.576699  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:45:45.576907  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:45:45.577168  149242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:45:45.577527  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:45:45.577570  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:45:45.593011  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0420 01:45:45.593360  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:45:45.593792  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:45:45.593811  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:45:45.594122  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:45:45.594334  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:45:45.630879  149242 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:45:45.632102  149242 start.go:297] selected driver: kvm2
	I0420 01:45:45.632116  149242 start.go:901] validating driver "kvm2" against &{Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:45:45.632239  149242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:45:45.632875  149242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:45:45.632950  149242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:45:45.647874  149242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:45:45.648357  149242 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0420 01:45:45.648413  149242 cni.go:84] Creating CNI manager for ""
	I0420 01:45:45.648425  149242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:45:45.648465  149242 start.go:340] cluster config:
	{Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:45:45.648580  149242 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:45:45.650367  149242 out.go:177] * Starting "newest-cni-776287" primary control-plane node in "newest-cni-776287" cluster
	I0420 01:45:45.651618  149242 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:45:45.651664  149242 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:45:45.651674  149242 cache.go:56] Caching tarball of preloaded images
	I0420 01:45:45.651740  149242 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:45:45.651751  149242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:45:45.652181  149242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/config.json ...
	I0420 01:45:45.652446  149242 start.go:360] acquireMachinesLock for newest-cni-776287: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:45:45.652491  149242 start.go:364] duration metric: took 26.274µs to acquireMachinesLock for "newest-cni-776287"
	I0420 01:45:45.652504  149242 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:45:45.652513  149242 fix.go:54] fixHost starting: 
	I0420 01:45:45.653067  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:45:45.653107  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:45:45.666896  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45041
	I0420 01:45:45.667282  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:45:45.667763  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:45:45.667782  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:45:45.668106  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:45:45.668270  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:45:45.668375  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:45:45.670080  149242 fix.go:112] recreateIfNeeded on newest-cni-776287: state=Stopped err=<nil>
	I0420 01:45:45.670105  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	W0420 01:45:45.670276  149242 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:45:45.672028  149242 out.go:177] * Restarting existing kvm2 VM for "newest-cni-776287" ...
	I0420 01:45:45.673276  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Start
	I0420 01:45:45.673458  149242 main.go:141] libmachine: (newest-cni-776287) Ensuring networks are active...
	I0420 01:45:45.674242  149242 main.go:141] libmachine: (newest-cni-776287) Ensuring network default is active
	I0420 01:45:45.674616  149242 main.go:141] libmachine: (newest-cni-776287) Ensuring network mk-newest-cni-776287 is active
	I0420 01:45:45.674909  149242 main.go:141] libmachine: (newest-cni-776287) Getting domain xml...
	I0420 01:45:45.675567  149242 main.go:141] libmachine: (newest-cni-776287) Creating domain...
	I0420 01:45:46.878253  149242 main.go:141] libmachine: (newest-cni-776287) Waiting to get IP...
	I0420 01:45:46.879119  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:46.879598  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:46.879665  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:46.879560  149277 retry.go:31] will retry after 238.242433ms: waiting for machine to come up
	I0420 01:45:47.119199  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:47.119788  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:47.119815  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:47.119741  149277 retry.go:31] will retry after 241.219006ms: waiting for machine to come up
	I0420 01:45:47.362225  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:47.362712  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:47.362736  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:47.362657  149277 retry.go:31] will retry after 382.193297ms: waiting for machine to come up
	I0420 01:45:47.745943  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:47.746450  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:47.746478  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:47.746406  149277 retry.go:31] will retry after 452.25947ms: waiting for machine to come up
	I0420 01:45:48.200226  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:48.200700  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:48.200722  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:48.200650  149277 retry.go:31] will retry after 483.119811ms: waiting for machine to come up
	I0420 01:45:48.685397  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:48.685950  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:48.685990  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:48.685923  149277 retry.go:31] will retry after 760.841312ms: waiting for machine to come up
	I0420 01:45:49.448068  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:49.448533  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:49.448562  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:49.448479  149277 retry.go:31] will retry after 1.003742184s: waiting for machine to come up
	I0420 01:45:50.453596  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:50.454049  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:50.454077  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:50.454016  149277 retry.go:31] will retry after 1.167943095s: waiting for machine to come up
	I0420 01:45:51.623572  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:51.624052  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:51.624084  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:51.624000  149277 retry.go:31] will retry after 1.860901587s: waiting for machine to come up
	I0420 01:45:53.486439  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:53.486887  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:53.486921  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:53.486855  149277 retry.go:31] will retry after 2.19188582s: waiting for machine to come up
	I0420 01:45:55.680620  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:55.681068  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:55.681125  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:55.681050  149277 retry.go:31] will retry after 2.67498922s: waiting for machine to come up
	I0420 01:45:58.358618  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:45:58.359132  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:45:58.359164  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:45:58.359082  149277 retry.go:31] will retry after 3.197223234s: waiting for machine to come up
	I0420 01:46:01.557512  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:01.557968  149242 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:46:01.557997  149242 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:46:01.557931  149277 retry.go:31] will retry after 3.39301121s: waiting for machine to come up
	I0420 01:46:04.954477  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:04.955146  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has current primary IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:04.955180  149242 main.go:141] libmachine: (newest-cni-776287) Found IP for machine: 192.168.61.191
	I0420 01:46:04.955220  149242 main.go:141] libmachine: (newest-cni-776287) Reserving static IP address...
	I0420 01:46:04.955593  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "newest-cni-776287", mac: "52:54:00:e3:cd:b1", ip: "192.168.61.191"} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:04.955614  149242 main.go:141] libmachine: (newest-cni-776287) DBG | skip adding static IP to network mk-newest-cni-776287 - found existing host DHCP lease matching {name: "newest-cni-776287", mac: "52:54:00:e3:cd:b1", ip: "192.168.61.191"}
	I0420 01:46:04.955636  149242 main.go:141] libmachine: (newest-cni-776287) Reserved static IP address: 192.168.61.191
	I0420 01:46:04.955652  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Getting to WaitForSSH function...
	I0420 01:46:04.955670  149242 main.go:141] libmachine: (newest-cni-776287) Waiting for SSH to be available...
	I0420 01:46:04.957798  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:04.958134  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:04.958169  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:04.958234  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Using SSH client type: external
	I0420 01:46:04.958265  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa (-rw-------)
	I0420 01:46:04.958308  149242 main.go:141] libmachine: (newest-cni-776287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:46:04.958321  149242 main.go:141] libmachine: (newest-cni-776287) DBG | About to run SSH command:
	I0420 01:46:04.958330  149242 main.go:141] libmachine: (newest-cni-776287) DBG | exit 0
	I0420 01:46:05.085729  149242 main.go:141] libmachine: (newest-cni-776287) DBG | SSH cmd err, output: <nil>: 
	I0420 01:46:05.086153  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetConfigRaw
	I0420 01:46:05.086827  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:46:05.089453  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.089787  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.089812  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.090062  149242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/config.json ...
	I0420 01:46:05.090256  149242 machine.go:94] provisionDockerMachine start ...
	I0420 01:46:05.090276  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:05.090494  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.092812  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.093129  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.093158  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.093269  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:05.093482  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.093669  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.093797  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:05.093975  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:05.094206  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:05.094221  149242 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:46:05.211572  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:46:05.211603  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:46:05.211866  149242 buildroot.go:166] provisioning hostname "newest-cni-776287"
	I0420 01:46:05.211900  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:46:05.212137  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.215162  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.215517  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.215560  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.215629  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:05.215853  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.216085  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.216281  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:05.216465  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:05.216638  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:05.216651  149242 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-776287 && echo "newest-cni-776287" | sudo tee /etc/hostname
	I0420 01:46:05.351876  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-776287
	
	I0420 01:46:05.351906  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.355253  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.355687  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.355710  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.355948  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:05.356151  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.356299  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.356445  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:05.356634  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:05.356882  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:05.356912  149242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-776287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-776287/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-776287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:46:05.485228  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:46:05.485268  149242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:46:05.485296  149242 buildroot.go:174] setting up certificates
	I0420 01:46:05.485328  149242 provision.go:84] configureAuth start
	I0420 01:46:05.485344  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:46:05.485668  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:46:05.488280  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.488695  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.488726  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.488861  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.491013  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.491321  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.491347  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.491499  149242 provision.go:143] copyHostCerts
	I0420 01:46:05.491554  149242 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:46:05.491564  149242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:46:05.491636  149242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:46:05.491749  149242 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:46:05.491759  149242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:46:05.491787  149242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:46:05.491854  149242 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:46:05.491862  149242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:46:05.491893  149242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:46:05.491951  149242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.newest-cni-776287 san=[127.0.0.1 192.168.61.191 localhost minikube newest-cni-776287]
	I0420 01:46:05.875486  149242 provision.go:177] copyRemoteCerts
	I0420 01:46:05.875548  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:46:05.875578  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:05.878259  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.878640  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:05.878671  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:05.878844  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:05.879040  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:05.879206  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:05.879311  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:05.964503  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:46:05.992896  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0420 01:46:06.019923  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:46:06.048801  149242 provision.go:87] duration metric: took 563.457283ms to configureAuth
	I0420 01:46:06.048837  149242 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:46:06.049061  149242 config.go:182] Loaded profile config "newest-cni-776287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:46:06.049162  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.051954  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.052309  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.052351  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.052550  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.052772  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.052947  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.053125  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.053294  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:06.053533  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:06.053557  149242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:46:06.357790  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:46:06.357822  149242 machine.go:97] duration metric: took 1.267552277s to provisionDockerMachine
	I0420 01:46:06.357834  149242 start.go:293] postStartSetup for "newest-cni-776287" (driver="kvm2")
	I0420 01:46:06.357845  149242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:46:06.357867  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.358265  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:46:06.358304  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.361147  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.361545  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.361574  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.361730  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.361926  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.362108  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.362280  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:06.449156  149242 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:46:06.454234  149242 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:46:06.454259  149242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:46:06.454345  149242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:46:06.454451  149242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:46:06.454573  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:46:06.465046  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:46:06.495003  149242 start.go:296] duration metric: took 137.154256ms for postStartSetup
	I0420 01:46:06.495041  149242 fix.go:56] duration metric: took 20.842527537s for fixHost
	I0420 01:46:06.495066  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.497825  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.498247  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.498274  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.498461  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.498673  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.498860  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.499063  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.499221  149242 main.go:141] libmachine: Using SSH client type: native
	I0420 01:46:06.499406  149242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:46:06.499418  149242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:46:06.614980  149242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713577566.594337317
	
	I0420 01:46:06.615002  149242 fix.go:216] guest clock: 1713577566.594337317
	I0420 01:46:06.615010  149242 fix.go:229] Guest: 2024-04-20 01:46:06.594337317 +0000 UTC Remote: 2024-04-20 01:46:06.495044536 +0000 UTC m=+20.996820664 (delta=99.292781ms)
	I0420 01:46:06.615029  149242 fix.go:200] guest clock delta is within tolerance: 99.292781ms
	I0420 01:46:06.615033  149242 start.go:83] releasing machines lock for "newest-cni-776287", held for 20.962535545s
	I0420 01:46:06.615051  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.615325  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:46:06.618179  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.618550  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.618587  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.618707  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.619305  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.619480  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:06.619619  149242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:46:06.619698  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.619719  149242 ssh_runner.go:195] Run: cat /version.json
	I0420 01:46:06.619735  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:06.622513  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.622536  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.622957  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.622997  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:06.623024  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.623066  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:06.623177  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.623350  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.623403  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:06.623549  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:06.623553  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.623735  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:06.623733  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:06.623902  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:06.706725  149242 ssh_runner.go:195] Run: systemctl --version
	I0420 01:46:06.728359  149242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:46:06.870696  149242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:46:06.878547  149242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:46:06.878631  149242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:46:06.897121  149242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:46:06.897148  149242 start.go:494] detecting cgroup driver to use...
	I0420 01:46:06.897201  149242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:46:06.915872  149242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:46:06.932406  149242 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:46:06.932471  149242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:46:06.947748  149242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:46:06.966622  149242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:46:07.110766  149242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:46:07.278034  149242 docker.go:233] disabling docker service ...
	I0420 01:46:07.278104  149242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:46:07.296001  149242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:46:07.311682  149242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:46:07.474546  149242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:46:07.611807  149242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:46:07.628597  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:46:07.649520  149242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:46:07.649589  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.661780  149242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:46:07.661862  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.674043  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.686222  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.698285  149242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:46:07.710717  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.723422  149242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:46:07.741910  149242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
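The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, replace conmon_cgroup with "pod", and open unprivileged ports via default_sysctls. A small Go sketch that assembles the first few of those commands; the helper name and the act of printing instead of running them over SSH are assumptions of this sketch, not minikube code.

package main

import "fmt"

// criOConfigEdits assembles the same in-place sed edits shown in the log above.
func criOConfigEdits(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	// Print the commands instead of executing them on the guest.
	for _, cmd := range criOConfigEdits("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}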
	I0420 01:46:07.753719  149242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:46:07.764549  149242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:46:07.764626  149242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:46:07.780430  149242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:46:07.791658  149242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:46:07.917090  149242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:46:08.069930  149242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:46:08.070015  149242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:46:08.076068  149242 start.go:562] Will wait 60s for crictl version
	I0420 01:46:08.076130  149242 ssh_runner.go:195] Run: which crictl
	I0420 01:46:08.080509  149242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:46:08.128220  149242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:46:08.128317  149242 ssh_runner.go:195] Run: crio --version
	I0420 01:46:08.161201  149242 ssh_runner.go:195] Run: crio --version
	I0420 01:46:08.195132  149242 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:46:08.196382  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:46:08.199186  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:08.199633  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:08.199656  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:08.199933  149242 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:46:08.205344  149242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:46:08.221288  149242 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0420 01:46:08.222637  149242 kubeadm.go:877] updating cluster {Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress
: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:46:08.222760  149242 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:46:08.222823  149242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:46:08.264215  149242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:46:08.264296  149242 ssh_runner.go:195] Run: which lz4
	I0420 01:46:08.269386  149242 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0420 01:46:08.274606  149242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:46:08.274647  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:46:10.053160  149242 crio.go:462] duration metric: took 1.78380931s to copy over tarball
	I0420 01:46:10.053243  149242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:46:12.669769  149242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.616493997s)
	I0420 01:46:12.669814  149242 crio.go:469] duration metric: took 2.616625212s to extract the tarball
	I0420 01:46:12.669823  149242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:46:12.711292  149242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:46:12.768241  149242 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:46:12.768270  149242 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:46:12.768281  149242 kubeadm.go:928] updating node { 192.168.61.191 8443 v1.30.0 crio true true} ...
	I0420 01:46:12.768495  149242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-776287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:46:12.768570  149242 ssh_runner.go:195] Run: crio config
	I0420 01:46:12.825723  149242 cni.go:84] Creating CNI manager for ""
	I0420 01:46:12.825746  149242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:46:12.825761  149242 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0420 01:46:12.825785  149242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.191 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-776287 NodeName:newest-cni-776287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.61.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:46:12.825962  149242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-776287"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
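The rendered kubeadm/kubelet/kube-proxy configuration above fixes the pod network to 10.42.0.0/16 (from the kubeadm.pod-network-cidr extra option) while services stay on the default 10.96.0.0/12. A standalone sanity check in Go that the two ranges do not overlap; the check and its helper are illustrative only and not part of minikube.

package main

import (
	"fmt"
	"net/netip"
)

// overlap reports whether two CIDR ranges share any addresses; for aligned
// prefixes, one must contain the other's base address if they overlap.
func overlap(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16") // podSubnet from the config above
	svc := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet from the config above
	fmt.Printf("pod/service CIDR overlap: %v\n", overlap(pod, svc))
}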
	
	I0420 01:46:12.826064  149242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:46:12.837359  149242 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:46:12.837432  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:46:12.847688  149242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0420 01:46:12.866105  149242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:46:12.884770  149242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0420 01:46:12.904337  149242 ssh_runner.go:195] Run: grep 192.168.61.191	control-plane.minikube.internal$ /etc/hosts
	I0420 01:46:12.908708  149242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:46:12.922093  149242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:46:13.063277  149242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:46:13.084324  149242 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287 for IP: 192.168.61.191
	I0420 01:46:13.084351  149242 certs.go:194] generating shared ca certs ...
	I0420 01:46:13.084403  149242 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:46:13.084627  149242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:46:13.084702  149242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:46:13.084719  149242 certs.go:256] generating profile certs ...
	I0420 01:46:13.084827  149242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/client.key
	I0420 01:46:13.084905  149242 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key.e52dbc46
	I0420 01:46:13.084958  149242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.key
	I0420 01:46:13.085071  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:46:13.085114  149242 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:46:13.085128  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:46:13.085158  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:46:13.085196  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:46:13.085236  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:46:13.085296  149242 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:46:13.086292  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:46:13.131380  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:46:13.178504  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:46:13.222559  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:46:13.266862  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 01:46:13.304248  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 01:46:13.335900  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:46:13.368532  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:46:13.397073  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:46:13.425107  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:46:13.452826  149242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:46:13.479851  149242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:46:13.499455  149242 ssh_runner.go:195] Run: openssl version
	I0420 01:46:13.506096  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:46:13.518557  149242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:46:13.524133  149242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:46:13.524200  149242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:46:13.530388  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:46:13.542816  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:46:13.555635  149242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:46:13.561072  149242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:46:13.561143  149242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:46:13.567887  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:46:13.580371  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:46:13.592643  149242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:46:13.598085  149242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:46:13.598148  149242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:46:13.604865  149242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:46:13.616781  149242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:46:13.622290  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:46:13.629892  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:46:13.636564  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:46:13.643826  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:46:13.650398  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:46:13.657707  149242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
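Each `openssl x509 -noout -checkend 86400` run above asks whether a certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509; the path is simply the first one from the log, and the helper is a sketch rather than minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what `openssl x509 -noout -checkend 86400` verifies.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; any PEM-encoded certificate works.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}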
	I0420 01:46:13.664262  149242 kubeadm.go:391] StartCluster: {Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: N
etwork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:46:13.664346  149242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:46:13.664399  149242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:46:13.707757  149242 cri.go:89] found id: ""
	I0420 01:46:13.707849  149242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:46:13.718926  149242 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:46:13.718973  149242 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:46:13.718987  149242 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:46:13.719070  149242 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:46:13.731007  149242 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:46:13.737951  149242 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-776287" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:46:13.738591  149242 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-76456/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-776287" cluster setting kubeconfig missing "newest-cni-776287" context setting]
	I0420 01:46:13.739488  149242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:46:13.820710  149242 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:46:13.832524  149242 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.191
	I0420 01:46:13.832571  149242 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:46:13.832583  149242 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:46:13.832652  149242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:46:13.879863  149242 cri.go:89] found id: ""
	I0420 01:46:13.879973  149242 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:46:13.900869  149242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:46:13.914445  149242 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:46:13.914470  149242 kubeadm.go:156] found existing configuration files:
	
	I0420 01:46:13.914523  149242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:46:13.924942  149242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:46:13.924995  149242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:46:13.935703  149242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:46:13.947364  149242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:46:13.947429  149242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:46:13.957830  149242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:46:13.967527  149242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:46:13.967595  149242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:46:13.977980  149242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:46:13.987638  149242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:46:13.987683  149242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:46:13.997676  149242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:46:14.007968  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:14.139667  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:15.195874  149242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.056164067s)
	I0420 01:46:15.195909  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:15.414638  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:15.487388  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:15.573436  149242 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:46:15.573512  149242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:46:16.074562  149242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:46:16.574049  149242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:46:16.630963  149242 api_server.go:72] duration metric: took 1.0575267s to wait for apiserver process to appear ...
	I0420 01:46:16.630998  149242 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:46:16.631019  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:16.631562  149242 api_server.go:269] stopped: https://192.168.61.191:8443/healthz: Get "https://192.168.61.191:8443/healthz": dial tcp 192.168.61.191:8443: connect: connection refused
	I0420 01:46:17.131088  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:19.567510  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:46:19.567548  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:46:19.567564  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:19.590373  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:46:19.590399  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:46:19.631651  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:19.647816  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:46:19.647856  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:46:20.131374  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:20.137384  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:46:20.137411  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:46:20.631082  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:20.640612  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:46:20.640640  149242 api_server.go:103] status: https://192.168.61.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:46:21.131774  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:21.140058  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 200:
	ok
	I0420 01:46:21.168333  149242 api_server.go:141] control plane version: v1.30.0
	I0420 01:46:21.168361  149242 api_server.go:131] duration metric: took 4.537355807s to wait for apiserver health ...
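The retries above poll https://192.168.61.191:8443/healthz, tolerating 403 (anonymous user) and 500 (bootstrap post-start hooks still failing) responses until the endpoint returns 200. A stripped-down version of such a poller in Go; unlike minikube, which trusts its own CA, this sketch simply skips certificate verification.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it answers 200 OK
// or the deadline passes, mirroring the retry loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch only: skip certificate verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403/500 while bootstrap post-start hooks finish: keep retrying.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ready after %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.191:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}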
	I0420 01:46:21.168371  149242 cni.go:84] Creating CNI manager for ""
	I0420 01:46:21.168377  149242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:46:21.170195  149242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:46:21.171575  149242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:46:21.195417  149242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:46:21.255917  149242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:46:21.274060  149242 system_pods.go:59] 8 kube-system pods found
	I0420 01:46:21.274109  149242 system_pods.go:61] "coredns-7db6d8ff4d-s79q5" [1e743f7e-a708-49e6-97fc-772bfb86bd1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:46:21.274125  149242 system_pods.go:61] "etcd-newest-cni-776287" [da504341-0d60-43a1-aa84-2c6f9f8ad005] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:46:21.274135  149242 system_pods.go:61] "kube-apiserver-newest-cni-776287" [723f9cc0-666c-43d9-abc9-d32948a2847b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:46:21.274151  149242 system_pods.go:61] "kube-controller-manager-newest-cni-776287" [9edc1ff3-4f5c-4c86-93bf-ead0cf5d81c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:46:21.274161  149242 system_pods.go:61] "kube-proxy-bdmnr" [8ab3ad83-4e89-4871-bae6-eadf6611e259] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0420 01:46:21.274171  149242 system_pods.go:61] "kube-scheduler-newest-cni-776287" [cbe24b8c-a717-4d02-85c5-2b4c4c66914b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:46:21.274181  149242 system_pods.go:61] "metrics-server-569cc877fc-m42jf" [9840799c-5af6-4143-8531-65fc3bf48118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:46:21.274191  149242 system_pods.go:61] "storage-provisioner" [d3bd0842-64d3-4df8-b59e-270ca31e20ac] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:46:21.274203  149242 system_pods.go:74] duration metric: took 18.258555ms to wait for pod list to return data ...
	I0420 01:46:21.274216  149242 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:46:21.280419  149242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:46:21.280453  149242 node_conditions.go:123] node cpu capacity is 2
	I0420 01:46:21.280465  149242 node_conditions.go:105] duration metric: took 6.240264ms to run NodePressure ...
	I0420 01:46:21.280490  149242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:46:21.716213  149242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:46:21.733026  149242 ops.go:34] apiserver oom_adj: -16
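The check above reads /proc/$(pgrep kube-apiserver)/oom_adj and confirms the apiserver is shielded from the OOM killer (-16). A tiny Go helper that reads the same proc file; note oom_adj is the legacy knob and newer kernels also expose oom_score_adj. The helper name and the self-inspection in main are assumptions of this sketch.

package main

import (
	"fmt"
	"os"
	"strings"
)

// readOOMAdj returns /proc/<pid>/oom_adj, the value printed above for the apiserver.
func readOOMAdj(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	val, err := readOOMAdj(os.Getpid()) // inspect this process as a demo
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("oom_adj:", val)
}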
	I0420 01:46:21.733052  149242 kubeadm.go:591] duration metric: took 8.014057015s to restartPrimaryControlPlane
	I0420 01:46:21.733063  149242 kubeadm.go:393] duration metric: took 8.068806353s to StartCluster
	I0420 01:46:21.733084  149242 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:46:21.733172  149242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:46:21.735155  149242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:46:21.735485  149242 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:46:21.737241  149242 out.go:177] * Verifying Kubernetes components...
	I0420 01:46:21.735565  149242 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:46:21.735766  149242 config.go:182] Loaded profile config "newest-cni-776287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:46:21.738588  149242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:46:21.738616  149242 addons.go:69] Setting default-storageclass=true in profile "newest-cni-776287"
	I0420 01:46:21.738637  149242 addons.go:69] Setting metrics-server=true in profile "newest-cni-776287"
	I0420 01:46:21.738662  149242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-776287"
	I0420 01:46:21.738664  149242 addons.go:69] Setting dashboard=true in profile "newest-cni-776287"
	I0420 01:46:21.738622  149242 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-776287"
	I0420 01:46:21.738689  149242 addons.go:234] Setting addon dashboard=true in "newest-cni-776287"
	I0420 01:46:21.738689  149242 addons.go:234] Setting addon metrics-server=true in "newest-cni-776287"
	W0420 01:46:21.738697  149242 addons.go:243] addon dashboard should already be in state true
	W0420 01:46:21.738701  149242 addons.go:243] addon metrics-server should already be in state true
	I0420 01:46:21.738716  149242 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-776287"
	I0420 01:46:21.738726  149242 host.go:66] Checking if "newest-cni-776287" exists ...
	W0420 01:46:21.738730  149242 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:46:21.738740  149242 host.go:66] Checking if "newest-cni-776287" exists ...
	I0420 01:46:21.738768  149242 host.go:66] Checking if "newest-cni-776287" exists ...
	I0420 01:46:21.738998  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.739047  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.739130  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.739155  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.739165  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.739172  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.739177  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.739199  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.756500  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43197
	I0420 01:46:21.756504  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35407
	I0420 01:46:21.757346  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.757366  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.757351  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0420 01:46:21.757949  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.757950  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.757986  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I0420 01:46:21.757970  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.758001  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.757992  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.758405  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.758471  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.758606  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.758618  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.759073  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.759101  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.759113  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.759147  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.759319  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.759398  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.759839  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.759864  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.760132  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.760180  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.760357  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.760711  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.764481  149242 addons.go:234] Setting addon default-storageclass=true in "newest-cni-776287"
	W0420 01:46:21.764503  149242 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:46:21.764538  149242 host.go:66] Checking if "newest-cni-776287" exists ...
	I0420 01:46:21.764893  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.764930  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.779043  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I0420 01:46:21.779386  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0420 01:46:21.779527  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.780138  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.780163  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.780187  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.780667  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.780690  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.780705  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.781040  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.781088  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.781257  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0420 01:46:21.781425  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42979
	I0420 01:46:21.781438  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.781835  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.781923  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.782511  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.782537  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.783197  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.783236  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:21.783280  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.783318  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.785443  149242 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0420 01:46:21.783771  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.783851  149242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:46:21.785570  149242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:46:21.784344  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:21.786063  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.789207  149242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:46:21.787620  149242 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0420 01:46:21.789813  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:21.790628  149242 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:46:21.790644  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:46:21.790659  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:21.792211  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0420 01:46:21.792226  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0420 01:46:21.792241  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:21.794092  149242 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:46:21.795793  149242 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:46:21.795812  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:46:21.795828  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:21.794064  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.795899  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:21.795921  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.795034  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:21.795542  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.796176  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:21.796263  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:21.796288  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.796300  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:21.796490  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:21.796534  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:21.796797  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:21.797099  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:21.797275  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:21.798665  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.799026  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:21.799058  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.799185  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:21.799336  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:21.799488  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:21.799629  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:21.803998  149242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I0420 01:46:21.804419  149242 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:46:21.804887  149242 main.go:141] libmachine: Using API Version  1
	I0420 01:46:21.804900  149242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:46:21.805265  149242 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:46:21.805455  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:46:21.806787  149242 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:46:21.807075  149242 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:46:21.807112  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:46:21.807130  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:46:21.809421  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.809654  149242 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:45:58 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:46:21.809683  149242 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:46:21.809840  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:46:21.810030  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:46:21.810153  149242 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:46:21.810302  149242 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:46:21.986819  149242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:46:22.009563  149242 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:46:22.009651  149242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:46:22.026657  149242 api_server.go:72] duration metric: took 291.126793ms to wait for apiserver process to appear ...
	I0420 01:46:22.026687  149242 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:46:22.026708  149242 api_server.go:253] Checking apiserver healthz at https://192.168.61.191:8443/healthz ...
	I0420 01:46:22.033480  149242 api_server.go:279] https://192.168.61.191:8443/healthz returned 200:
	ok
	I0420 01:46:22.034801  149242 api_server.go:141] control plane version: v1.30.0
	I0420 01:46:22.034821  149242 api_server.go:131] duration metric: took 8.126943ms to wait for apiserver health ...
	I0420 01:46:22.034829  149242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:46:22.045680  149242 system_pods.go:59] 8 kube-system pods found
	I0420 01:46:22.045713  149242 system_pods.go:61] "coredns-7db6d8ff4d-s79q5" [1e743f7e-a708-49e6-97fc-772bfb86bd1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:46:22.045723  149242 system_pods.go:61] "etcd-newest-cni-776287" [da504341-0d60-43a1-aa84-2c6f9f8ad005] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:46:22.045735  149242 system_pods.go:61] "kube-apiserver-newest-cni-776287" [723f9cc0-666c-43d9-abc9-d32948a2847b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:46:22.045751  149242 system_pods.go:61] "kube-controller-manager-newest-cni-776287" [9edc1ff3-4f5c-4c86-93bf-ead0cf5d81c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:46:22.045758  149242 system_pods.go:61] "kube-proxy-bdmnr" [8ab3ad83-4e89-4871-bae6-eadf6611e259] Running
	I0420 01:46:22.045770  149242 system_pods.go:61] "kube-scheduler-newest-cni-776287" [cbe24b8c-a717-4d02-85c5-2b4c4c66914b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:46:22.045777  149242 system_pods.go:61] "metrics-server-569cc877fc-m42jf" [9840799c-5af6-4143-8531-65fc3bf48118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:46:22.045781  149242 system_pods.go:61] "storage-provisioner" [d3bd0842-64d3-4df8-b59e-270ca31e20ac] Running
	I0420 01:46:22.045787  149242 system_pods.go:74] duration metric: took 10.952797ms to wait for pod list to return data ...
	I0420 01:46:22.045796  149242 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:46:22.048136  149242 default_sa.go:45] found service account: "default"
	I0420 01:46:22.048158  149242 default_sa.go:55] duration metric: took 2.353757ms for default service account to be created ...
	I0420 01:46:22.048172  149242 kubeadm.go:576] duration metric: took 312.646021ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0420 01:46:22.048198  149242 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:46:22.050374  149242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:46:22.050397  149242 node_conditions.go:123] node cpu capacity is 2
	I0420 01:46:22.050408  149242 node_conditions.go:105] duration metric: took 2.201022ms to run NodePressure ...
	I0420 01:46:22.050422  149242 start.go:240] waiting for startup goroutines ...
	I0420 01:46:22.066658  149242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:46:22.128303  149242 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:46:22.128325  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:46:22.171786  149242 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:46:22.171816  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:46:22.191227  149242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:46:22.212912  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0420 01:46:22.212940  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0420 01:46:22.246026  149242 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:46:22.246053  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:46:22.283090  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0420 01:46:22.283124  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0420 01:46:22.315938  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0420 01:46:22.315968  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0420 01:46:22.343205  149242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:46:22.381372  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0420 01:46:22.381400  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0420 01:46:22.423120  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0420 01:46:22.423157  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0420 01:46:22.444409  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:22.444444  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:22.444750  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:22.444776  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:22.444785  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:22.444794  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:22.445056  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:22.445075  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:22.445097  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Closing plugin on server side
	I0420 01:46:22.457606  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:22.457633  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:22.457917  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:22.457935  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:22.478412  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0420 01:46:22.478433  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0420 01:46:22.521933  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0420 01:46:22.521962  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0420 01:46:22.583580  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0420 01:46:22.583609  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0420 01:46:22.641064  149242 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0420 01:46:22.641092  149242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0420 01:46:22.676894  149242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0420 01:46:23.610522  149242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.267267721s)
	I0420 01:46:23.610585  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.610606  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.610636  149242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.419363531s)
	I0420 01:46:23.610683  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.610700  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.612347  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.612355  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.612389  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.612365  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.612402  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.612435  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.612447  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.612460  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.612809  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.612817  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Closing plugin on server side
	I0420 01:46:23.612822  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.612829  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.612838  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.612842  149242 addons.go:470] Verifying addon metrics-server=true in "newest-cni-776287"
	I0420 01:46:23.612856  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Closing plugin on server side
	I0420 01:46:23.792528  149242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.11558119s)
	I0420 01:46:23.792594  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.792607  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.792957  149242 main.go:141] libmachine: (newest-cni-776287) DBG | Closing plugin on server side
	I0420 01:46:23.793046  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.793068  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.793083  149242 main.go:141] libmachine: Making call to close driver server
	I0420 01:46:23.793091  149242 main.go:141] libmachine: (newest-cni-776287) Calling .Close
	I0420 01:46:23.793347  149242 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:46:23.793362  149242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:46:23.795177  149242 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-776287 addons enable metrics-server
	
	I0420 01:46:23.796789  149242 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0420 01:46:23.798364  149242 addons.go:505] duration metric: took 2.062810996s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0420 01:46:23.798408  149242 start.go:245] waiting for cluster config update ...
	I0420 01:46:23.798425  149242 start.go:254] writing updated cluster config ...
	I0420 01:46:23.798745  149242 ssh_runner.go:195] Run: rm -f paused
	I0420 01:46:23.850819  149242 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:46:23.852376  149242 out.go:177] * Done! kubectl is now configured to use "newest-cni-776287" cluster and "default" namespace by default
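The log above ends with minikube re-enabling the dashboard, metrics-server, storage-provisioner and default-storageclass addons on newest-cni-776287 and confirming the API server at https://192.168.61.191:8443/healthz. A minimal sketch of how the same state could be re-checked by hand, assuming the newest-cni-776287 profile and its kubeconfig context still exist on the test host:

    # List addon status for the profile (profile name taken from the log above)
    minikube -p newest-cni-776287 addons list

    # Repeat the healthz probe the log performs, via the API server
    kubectl --context newest-cni-776287 get --raw /healthz

    # Confirm the kube-system pods the log enumerates are present
    kubectl --context newest-cni-776287 -n kube-system get pods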
	
	
	==> CRI-O <==
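The journal excerpt below is CRI-O (RuntimeVersion 1.29.1 per the VersionResponse entries) debug output from the embed-certs-269507 node, recording the kubelet's periodic Version, ImageFsInfo and ListContainers CRI calls. A minimal sketch of how the same information could be pulled interactively, assuming SSH access to the node (for example via minikube ssh -p embed-certs-269507):

    # CRI runtime name/version (matches the VersionResponse entries below)
    sudo crictl version

    # Image filesystem usage (matches the ImageFsInfoResponse entries)
    sudo crictl imagefsinfo

    # Full container list returned by the ListContainers calls
    sudo crictl ps -a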
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.175760488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577585175736288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43f8109d-e4b4-4ef8-b130-1f14cb7af6a2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.176273162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2eb89a5-f38d-4a1b-932f-8a9b77a6c84a name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.176348109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2eb89a5-f38d-4a1b-932f-8a9b77a6c84a name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.176672775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f2d91a77303a1d1f78754f56e8285673fa3c82912968524268bfb82b6862551,PodSandboxId:1b2078fead88b76fc01ca7f4f074f851b9b2853cf803174f93ac997d33777513,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576671860326870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eee97ab-bb31-4a3d-be80-845b6545e897,},Annotations:map[string]string{io.kubernetes.container.hash: 85082f9b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f2094ebcfb7e4ca383a963f4d25356b7e36c999dd36a68860bdfa92b86c086,PodSandboxId:355e142949e4aeedc1349c185fbc3654ab3b0d991137f3acc7d4244c1d1a6207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671126164065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mpf5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 331105fe-dd08-409f-9b2d-658b958cd1a2,},Annotations:map[string]string{io.kubernetes.container.hash: 36ae1744,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461315561eb163aacbbaed2f1f488aa988fbe30040ac563863c1971ec2dfa4db,PodSandboxId:a7ab732b7acbf6ac6a83270716540afb22bbf82735a7e9ee6f008b0bd7fce058,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671069787308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ltzhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
ca2da30-b908-46fc-a028-d43a17c6307e,},Annotations:map[string]string{io.kubernetes.container.hash: 26cea6eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72866d395a259b41dd1568062ae9f525efd098c649089030d7dbe358475b416,PodSandboxId:7a109603aadde1c036974353a80ab912dc9b05b75d5d168569946085f8061861,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713576670070973648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4x66x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da8306-56f8-49bf-a2e7-cf5d4877dc16,},Annotations:map[string]string{io.kubernetes.container.hash: ff495a6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e929f7269b44770eeed9b70db7bf4f6c1f9b43e4d3b6575e87fa13f4bf4a84e,PodSandboxId:faef3224e215842fb808283749bdc3849cd4418d90ea5322b30b53e16c3a9b78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576650473340132
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdd3b00bf785377bddc8fab82d6d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d359f3a3d53d4edd8d3cf64481a586b3ab86d0a85e8ba6801990806ced8348,PodSandboxId:f984207e0813470103daec5dcbd25b7c89c868b8cbb0f729335c5f96f477a78c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576650491322267,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0587f14b2460eaf30de6c91e37166938,},Annotations:map[string]string{io.kubernetes.container.hash: 674a4080,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b85cfa748850504f7c20bbab2dc0000a90942d5a67a20269950485735cb292,PodSandboxId:d84eb958169e26f38279504c69921ef93b6c4df49a25416b8857176ca186a813,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576650457196099,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ca2077f4fbb1e69605853c38ebffe8,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c404e808b8cc8e4567f11f28c04624da4cf3a2f178f7e2146de9374c146072,PodSandboxId:93b7f48a0a1a553f1a77d60546e223d09ffdabedc551991be913d51d8e94b7f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576650449185388,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b975a1148b62f00792b68b5fc13bb267,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2eb89a5-f38d-4a1b-932f-8a9b77a6c84a name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.221356581Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64ac1fd3-cf47-4580-8eed-da2da3f14062 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.221426527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64ac1fd3-cf47-4580-8eed-da2da3f14062 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.223774824Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7cfc079-e2e0-499a-9fcb-d99ac48f996d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.224173735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577585224152889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7cfc079-e2e0-499a-9fcb-d99ac48f996d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.224843307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbb5f5ea-cfe5-431d-b426-9c8f231de30f name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.224928514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbb5f5ea-cfe5-431d-b426-9c8f231de30f name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.225129292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f2d91a77303a1d1f78754f56e8285673fa3c82912968524268bfb82b6862551,PodSandboxId:1b2078fead88b76fc01ca7f4f074f851b9b2853cf803174f93ac997d33777513,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576671860326870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eee97ab-bb31-4a3d-be80-845b6545e897,},Annotations:map[string]string{io.kubernetes.container.hash: 85082f9b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f2094ebcfb7e4ca383a963f4d25356b7e36c999dd36a68860bdfa92b86c086,PodSandboxId:355e142949e4aeedc1349c185fbc3654ab3b0d991137f3acc7d4244c1d1a6207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671126164065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mpf5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 331105fe-dd08-409f-9b2d-658b958cd1a2,},Annotations:map[string]string{io.kubernetes.container.hash: 36ae1744,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461315561eb163aacbbaed2f1f488aa988fbe30040ac563863c1971ec2dfa4db,PodSandboxId:a7ab732b7acbf6ac6a83270716540afb22bbf82735a7e9ee6f008b0bd7fce058,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671069787308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ltzhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
ca2da30-b908-46fc-a028-d43a17c6307e,},Annotations:map[string]string{io.kubernetes.container.hash: 26cea6eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72866d395a259b41dd1568062ae9f525efd098c649089030d7dbe358475b416,PodSandboxId:7a109603aadde1c036974353a80ab912dc9b05b75d5d168569946085f8061861,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713576670070973648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4x66x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da8306-56f8-49bf-a2e7-cf5d4877dc16,},Annotations:map[string]string{io.kubernetes.container.hash: ff495a6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e929f7269b44770eeed9b70db7bf4f6c1f9b43e4d3b6575e87fa13f4bf4a84e,PodSandboxId:faef3224e215842fb808283749bdc3849cd4418d90ea5322b30b53e16c3a9b78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576650473340132
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdd3b00bf785377bddc8fab82d6d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d359f3a3d53d4edd8d3cf64481a586b3ab86d0a85e8ba6801990806ced8348,PodSandboxId:f984207e0813470103daec5dcbd25b7c89c868b8cbb0f729335c5f96f477a78c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576650491322267,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0587f14b2460eaf30de6c91e37166938,},Annotations:map[string]string{io.kubernetes.container.hash: 674a4080,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b85cfa748850504f7c20bbab2dc0000a90942d5a67a20269950485735cb292,PodSandboxId:d84eb958169e26f38279504c69921ef93b6c4df49a25416b8857176ca186a813,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576650457196099,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ca2077f4fbb1e69605853c38ebffe8,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c404e808b8cc8e4567f11f28c04624da4cf3a2f178f7e2146de9374c146072,PodSandboxId:93b7f48a0a1a553f1a77d60546e223d09ffdabedc551991be913d51d8e94b7f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576650449185388,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b975a1148b62f00792b68b5fc13bb267,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbb5f5ea-cfe5-431d-b426-9c8f231de30f name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.274942143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59cbb0e3-4272-4ce7-adbd-62968f41a378 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.275014277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59cbb0e3-4272-4ce7-adbd-62968f41a378 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.277273338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c296fa75-b4bb-4638-afc5-00d247e38733 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.278073895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577585278044192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c296fa75-b4bb-4638-afc5-00d247e38733 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.279101593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ee41ce0-e796-4389-9e27-5c65ad1fdb1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.279159317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ee41ce0-e796-4389-9e27-5c65ad1fdb1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.279363849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f2d91a77303a1d1f78754f56e8285673fa3c82912968524268bfb82b6862551,PodSandboxId:1b2078fead88b76fc01ca7f4f074f851b9b2853cf803174f93ac997d33777513,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576671860326870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eee97ab-bb31-4a3d-be80-845b6545e897,},Annotations:map[string]string{io.kubernetes.container.hash: 85082f9b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f2094ebcfb7e4ca383a963f4d25356b7e36c999dd36a68860bdfa92b86c086,PodSandboxId:355e142949e4aeedc1349c185fbc3654ab3b0d991137f3acc7d4244c1d1a6207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671126164065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mpf5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 331105fe-dd08-409f-9b2d-658b958cd1a2,},Annotations:map[string]string{io.kubernetes.container.hash: 36ae1744,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461315561eb163aacbbaed2f1f488aa988fbe30040ac563863c1971ec2dfa4db,PodSandboxId:a7ab732b7acbf6ac6a83270716540afb22bbf82735a7e9ee6f008b0bd7fce058,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671069787308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ltzhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
ca2da30-b908-46fc-a028-d43a17c6307e,},Annotations:map[string]string{io.kubernetes.container.hash: 26cea6eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72866d395a259b41dd1568062ae9f525efd098c649089030d7dbe358475b416,PodSandboxId:7a109603aadde1c036974353a80ab912dc9b05b75d5d168569946085f8061861,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713576670070973648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4x66x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da8306-56f8-49bf-a2e7-cf5d4877dc16,},Annotations:map[string]string{io.kubernetes.container.hash: ff495a6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e929f7269b44770eeed9b70db7bf4f6c1f9b43e4d3b6575e87fa13f4bf4a84e,PodSandboxId:faef3224e215842fb808283749bdc3849cd4418d90ea5322b30b53e16c3a9b78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576650473340132
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdd3b00bf785377bddc8fab82d6d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d359f3a3d53d4edd8d3cf64481a586b3ab86d0a85e8ba6801990806ced8348,PodSandboxId:f984207e0813470103daec5dcbd25b7c89c868b8cbb0f729335c5f96f477a78c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576650491322267,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0587f14b2460eaf30de6c91e37166938,},Annotations:map[string]string{io.kubernetes.container.hash: 674a4080,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b85cfa748850504f7c20bbab2dc0000a90942d5a67a20269950485735cb292,PodSandboxId:d84eb958169e26f38279504c69921ef93b6c4df49a25416b8857176ca186a813,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576650457196099,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ca2077f4fbb1e69605853c38ebffe8,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c404e808b8cc8e4567f11f28c04624da4cf3a2f178f7e2146de9374c146072,PodSandboxId:93b7f48a0a1a553f1a77d60546e223d09ffdabedc551991be913d51d8e94b7f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576650449185388,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b975a1148b62f00792b68b5fc13bb267,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ee41ce0-e796-4389-9e27-5c65ad1fdb1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.320788895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d2cc2d1-b4e6-41bb-b33d-fe814607c468 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.320912359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d2cc2d1-b4e6-41bb-b33d-fe814607c468 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.322122570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bce637d8-9ea9-4f2f-a78f-32cad03d4198 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.322633935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577585322610456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bce637d8-9ea9-4f2f-a78f-32cad03d4198 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.324144117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=772764b9-1f15-4384-8bfa-a2bcd1faac0f name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.324198933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=772764b9-1f15-4384-8bfa-a2bcd1faac0f name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:46:25 embed-certs-269507 crio[729]: time="2024-04-20 01:46:25.324380171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f2d91a77303a1d1f78754f56e8285673fa3c82912968524268bfb82b6862551,PodSandboxId:1b2078fead88b76fc01ca7f4f074f851b9b2853cf803174f93ac997d33777513,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576671860326870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eee97ab-bb31-4a3d-be80-845b6545e897,},Annotations:map[string]string{io.kubernetes.container.hash: 85082f9b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58f2094ebcfb7e4ca383a963f4d25356b7e36c999dd36a68860bdfa92b86c086,PodSandboxId:355e142949e4aeedc1349c185fbc3654ab3b0d991137f3acc7d4244c1d1a6207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671126164065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mpf5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 331105fe-dd08-409f-9b2d-658b958cd1a2,},Annotations:map[string]string{io.kubernetes.container.hash: 36ae1744,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461315561eb163aacbbaed2f1f488aa988fbe30040ac563863c1971ec2dfa4db,PodSandboxId:a7ab732b7acbf6ac6a83270716540afb22bbf82735a7e9ee6f008b0bd7fce058,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576671069787308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ltzhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
ca2da30-b908-46fc-a028-d43a17c6307e,},Annotations:map[string]string{io.kubernetes.container.hash: 26cea6eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72866d395a259b41dd1568062ae9f525efd098c649089030d7dbe358475b416,PodSandboxId:7a109603aadde1c036974353a80ab912dc9b05b75d5d168569946085f8061861,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713576670070973648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4x66x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da8306-56f8-49bf-a2e7-cf5d4877dc16,},Annotations:map[string]string{io.kubernetes.container.hash: ff495a6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e929f7269b44770eeed9b70db7bf4f6c1f9b43e4d3b6575e87fa13f4bf4a84e,PodSandboxId:faef3224e215842fb808283749bdc3849cd4418d90ea5322b30b53e16c3a9b78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576650473340132
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdd3b00bf785377bddc8fab82d6d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d359f3a3d53d4edd8d3cf64481a586b3ab86d0a85e8ba6801990806ced8348,PodSandboxId:f984207e0813470103daec5dcbd25b7c89c868b8cbb0f729335c5f96f477a78c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576650491322267,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0587f14b2460eaf30de6c91e37166938,},Annotations:map[string]string{io.kubernetes.container.hash: 674a4080,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b85cfa748850504f7c20bbab2dc0000a90942d5a67a20269950485735cb292,PodSandboxId:d84eb958169e26f38279504c69921ef93b6c4df49a25416b8857176ca186a813,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576650457196099,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ca2077f4fbb1e69605853c38ebffe8,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c404e808b8cc8e4567f11f28c04624da4cf3a2f178f7e2146de9374c146072,PodSandboxId:93b7f48a0a1a553f1a77d60546e223d09ffdabedc551991be913d51d8e94b7f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576650449185388,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-269507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b975a1148b62f00792b68b5fc13bb267,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=772764b9-1f15-4384-8bfa-a2bcd1faac0f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1f2d91a77303a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   1b2078fead88b       storage-provisioner
	58f2094ebcfb7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   355e142949e4a       coredns-7db6d8ff4d-mpf5l
	461315561eb16       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   a7ab732b7acbf       coredns-7db6d8ff4d-ltzhp
	a72866d395a25       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   15 minutes ago      Running             kube-proxy                0                   7a109603aadde       kube-proxy-4x66x
	c6d359f3a3d53       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   f984207e08134       etcd-embed-certs-269507
	3e929f7269b44       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   15 minutes ago      Running             kube-controller-manager   2                   faef3224e2158       kube-controller-manager-embed-certs-269507
	d9b85cfa74885       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   15 minutes ago      Running             kube-apiserver            2                   d84eb958169e2       kube-apiserver-embed-certs-269507
	a8c404e808b8c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   15 minutes ago      Running             kube-scheduler            2                   93b7f48a0a1a5       kube-scheduler-embed-certs-269507
	
	
	==> coredns [461315561eb163aacbbaed2f1f488aa988fbe30040ac563863c1971ec2dfa4db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [58f2094ebcfb7e4ca383a963f4d25356b7e36c999dd36a68860bdfa92b86c086] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-269507
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-269507
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=embed-certs-269507
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T01_30_56_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:30:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-269507
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:46:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:41:28 +0000   Sat, 20 Apr 2024 01:30:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:41:28 +0000   Sat, 20 Apr 2024 01:30:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:41:28 +0000   Sat, 20 Apr 2024 01:30:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:41:28 +0000   Sat, 20 Apr 2024 01:30:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.184
	  Hostname:    embed-certs-269507
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 70baa34e90ac40738e978058d7b85f6a
	  System UUID:                70baa34e-90ac-4073-8e97-8058d7b85f6a
	  Boot ID:                    3953aa30-c7ca-4505-9da5-da799418c0c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-ltzhp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-mpf5l                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-269507                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-269507             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-269507    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4x66x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-269507             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-jwbst               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-269507 status is now: NodeHasSufficientMemory
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-269507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-269507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-269507 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-269507 event: Registered Node embed-certs-269507 in Controller
	
	
	==> dmesg <==
	[  +0.052987] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050652] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.831860] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.516649] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.566600] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.299096] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.067945] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.088762] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.261597] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.145621] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.338116] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +5.264487] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.064832] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.364859] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +4.644663] kauditd_printk_skb: 97 callbacks suppressed
	[Apr20 01:26] kauditd_printk_skb: 84 callbacks suppressed
	[Apr20 01:30] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.036376] systemd-fstab-generator[3618]: Ignoring "noauto" option for root device
	[  +4.495256] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.087822] systemd-fstab-generator[3943]: Ignoring "noauto" option for root device
	[Apr20 01:31] systemd-fstab-generator[4141]: Ignoring "noauto" option for root device
	[  +0.123323] kauditd_printk_skb: 14 callbacks suppressed
	[Apr20 01:32] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [c6d359f3a3d53d4edd8d3cf64481a586b3ab86d0a85e8ba6801990806ced8348] <==
	{"level":"info","ts":"2024-04-20T01:30:51.872208Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dfaeaf2ad25a061e","local-member-id":"bf2ced3b97aa693f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:51.872319Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:51.872358Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:30:51.875569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:30:51.875619Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:30:51.877538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2024/04/20 01:30:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-20T01:40:51.943416Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-04-20T01:40:51.953406Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":682,"took":"9.342036ms","hash":4207126743,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2359296,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-20T01:40:51.953601Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4207126743,"revision":682,"compact-revision":-1}
	{"level":"info","ts":"2024-04-20T01:45:51.950716Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-04-20T01:45:51.955107Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":925,"took":"3.489404ms","hash":1550819724,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-20T01:45:51.955184Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1550819724,"revision":925,"compact-revision":682}
	{"level":"info","ts":"2024-04-20T01:46:13.776148Z","caller":"traceutil/trace.go:171","msg":"trace[2064643726] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"516.366211ms","start":"2024-04-20T01:46:13.259648Z","end":"2024-04-20T01:46:13.776014Z","steps":["trace[2064643726] 'process raft request'  (duration: 516.210334ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:46:13.777561Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:46:13.259623Z","time spent":"516.660624ms","remote":"127.0.0.1:42002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1185 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-20T01:46:14.118397Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.390144ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7583937498161144939 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-269507\" mod_revision:1179 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-269507\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-269507\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-20T01:46:14.118904Z","caller":"traceutil/trace.go:171","msg":"trace[2043073828] linearizableReadLoop","detail":"{readStateIndex:1387; appliedIndex:1386; }","duration":"644.450587ms","start":"2024-04-20T01:46:13.474436Z","end":"2024-04-20T01:46:14.118887Z","steps":["trace[2043073828] 'read index received'  (duration: 302.389997ms)","trace[2043073828] 'applied index is now lower than readState.Index'  (duration: 342.05908ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T01:46:14.119034Z","caller":"traceutil/trace.go:171","msg":"trace[394754558] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"780.586516ms","start":"2024-04-20T01:46:13.338431Z","end":"2024-04-20T01:46:14.119017Z","steps":["trace[394754558] 'process raft request'  (duration: 554.310329ms)","trace[394754558] 'compare'  (duration: 225.141562ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T01:46:14.119214Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"420.630403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-04-20T01:46:14.119244Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:46:13.338415Z","time spent":"780.767377ms","remote":"127.0.0.1:42108","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-269507\" mod_revision:1179 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-269507\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-269507\" > >"}
	{"level":"info","ts":"2024-04-20T01:46:14.119272Z","caller":"traceutil/trace.go:171","msg":"trace[818447289] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1187; }","duration":"420.81566ms","start":"2024-04-20T01:46:13.698446Z","end":"2024-04-20T01:46:14.119262Z","steps":["trace[818447289] 'agreement among raft nodes before linearized reading'  (duration: 420.646283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:46:14.119398Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:46:13.698421Z","time spent":"420.96597ms","remote":"127.0.0.1:42028","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":29,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-04-20T01:46:14.11907Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"644.660553ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-20T01:46:14.119647Z","caller":"traceutil/trace.go:171","msg":"trace[1369499408] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1187; }","duration":"645.267202ms","start":"2024-04-20T01:46:13.474366Z","end":"2024-04-20T01:46:14.119633Z","steps":["trace[1369499408] 'agreement among raft nodes before linearized reading'  (duration: 644.662683ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:46:14.119689Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:46:13.47435Z","time spent":"645.328223ms","remote":"127.0.0.1:41826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 01:46:25 up 20 min,  0 users,  load average: 0.35, 0.34, 0.19
	Linux embed-certs-269507 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d9b85cfa748850504f7c20bbab2dc0000a90942d5a67a20269950485735cb292] <==
	W0420 01:43:54.370020       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:43:54.370115       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:43:54.370125       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:43:54.372397       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:43:54.372655       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:43:54.372695       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:45:53.373847       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:45:53.374242       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0420 01:45:54.374791       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:45:54.374895       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:45:54.374908       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:45:54.374992       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:45:54.375052       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:45:54.376397       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0420 01:46:13.778412       1 trace.go:236] Trace[1210764844]: "Update" accept:application/json, */*,audit-id:3b256f16-417b-456a-b581-c760b88bc460,client:192.168.50.184,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (20-Apr-2024 01:46:13.257) (total time: 521ms):
	Trace[1210764844]: ["GuaranteedUpdate etcd3" audit-id:3b256f16-417b-456a-b581-c760b88bc460,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 520ms (01:46:13.257)
	Trace[1210764844]:  ---"Txn call completed" 519ms (01:46:13.778)]
	Trace[1210764844]: [521.19539ms] [521.19539ms] END
	I0420 01:46:14.120639       1 trace.go:236] Trace[344125479]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3e21b1ed-1ec7-4d11-b8f7-a08185dda0ce,client:192.168.50.184,api-group:coordination.k8s.io,api-version:v1,name:embed-certs-269507,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-269507,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (20-Apr-2024 01:46:13.336) (total time: 783ms):
	Trace[344125479]: ["GuaranteedUpdate etcd3" audit-id:3e21b1ed-1ec7-4d11-b8f7-a08185dda0ce,key:/leases/kube-node-lease/embed-certs-269507,type:*coordination.Lease,resource:leases.coordination.k8s.io 783ms (01:46:13.337)
	Trace[344125479]:  ---"Txn call completed" 782ms (01:46:14.120)]
	Trace[344125479]: [783.638116ms] [783.638116ms] END
	
	
	==> kube-controller-manager [3e929f7269b44770eeed9b70db7bf4f6c1f9b43e4d3b6575e87fa13f4bf4a84e] <==
	I0420 01:40:39.461534       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:41:08.953403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:41:09.473558       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:41:38.958756       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:41:39.482570       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:42:08.965037       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:42:09.491026       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0420 01:42:15.266778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="341.045µs"
	I0420 01:42:26.266020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="103.866µs"
	E0420 01:42:38.972200       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:42:39.499728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:43:08.978609       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:43:09.508569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:43:38.984848       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:43:39.517578       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:44:08.992345       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:44:09.526028       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:44:38.999218       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:44:39.536792       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:45:09.004997       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:45:09.547602       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:45:39.011056       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:45:39.556145       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:46:09.020106       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:46:09.566190       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a72866d395a259b41dd1568062ae9f525efd098c649089030d7dbe358475b416] <==
	I0420 01:31:10.446907       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:31:10.491661       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.184"]
	I0420 01:31:10.606687       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:31:10.606749       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:31:10.606764       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:31:10.610661       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:31:10.610851       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:31:10.610867       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:31:10.613085       1 config.go:192] "Starting service config controller"
	I0420 01:31:10.613100       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:31:10.613207       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:31:10.613214       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:31:10.614116       1 config.go:319] "Starting node config controller"
	I0420 01:31:10.614125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:31:10.713222       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:31:10.713274       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:31:10.714440       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a8c404e808b8cc8e4567f11f28c04624da4cf3a2f178f7e2146de9374c146072] <==
	E0420 01:30:53.387295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 01:30:53.387537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:54.210018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:54.210204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:54.249389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:54.249591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:54.393930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 01:30:54.394133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 01:30:54.445254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 01:30:54.445311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 01:30:54.498081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 01:30:54.499130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 01:30:54.527567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 01:30:54.527624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 01:30:54.553415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 01:30:54.553628       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 01:30:54.555166       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 01:30:54.555292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 01:30:54.592672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:30:54.592733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 01:30:54.679211       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 01:30:54.679241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 01:30:54.692201       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:30:54.692325       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0420 01:30:56.973896       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 01:43:56 embed-certs-269507 kubelet[3950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:43:56 embed-certs-269507 kubelet[3950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:43:56 embed-certs-269507 kubelet[3950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:43:56 embed-certs-269507 kubelet[3950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:44:07 embed-certs-269507 kubelet[3950]: E0420 01:44:07.247544    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:44:21 embed-certs-269507 kubelet[3950]: E0420 01:44:21.248643    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:44:35 embed-certs-269507 kubelet[3950]: E0420 01:44:35.248239    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:44:46 embed-certs-269507 kubelet[3950]: E0420 01:44:46.250754    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:44:56 embed-certs-269507 kubelet[3950]: E0420 01:44:56.282224    3950 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:44:56 embed-certs-269507 kubelet[3950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:44:56 embed-certs-269507 kubelet[3950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:44:56 embed-certs-269507 kubelet[3950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:44:56 embed-certs-269507 kubelet[3950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:44:58 embed-certs-269507 kubelet[3950]: E0420 01:44:58.248196    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:45:13 embed-certs-269507 kubelet[3950]: E0420 01:45:13.248555    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:45:26 embed-certs-269507 kubelet[3950]: E0420 01:45:26.248651    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:45:38 embed-certs-269507 kubelet[3950]: E0420 01:45:38.249982    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:45:52 embed-certs-269507 kubelet[3950]: E0420 01:45:52.247957    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:45:56 embed-certs-269507 kubelet[3950]: E0420 01:45:56.287223    3950 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:45:56 embed-certs-269507 kubelet[3950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:45:56 embed-certs-269507 kubelet[3950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:45:56 embed-certs-269507 kubelet[3950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:45:56 embed-certs-269507 kubelet[3950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:46:07 embed-certs-269507 kubelet[3950]: E0420 01:46:07.248702    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	Apr 20 01:46:22 embed-certs-269507 kubelet[3950]: E0420 01:46:22.248699    3950 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jwbst" podUID="4d13a078-f3cd-43c2-8f15-fe5c36445294"
	
	
	==> storage-provisioner [1f2d91a77303a1d1f78754f56e8285673fa3c82912968524268bfb82b6862551] <==
	I0420 01:31:11.949711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 01:31:11.971242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 01:31:11.971567       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 01:31:11.980867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 01:31:11.981322       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-269507_3dd4b7cc-e3a1-4847-8c7c-340330c2d74c!
	I0420 01:31:11.982313       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"391cfcae-d01f-4568-b68e-09952097c20d", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-269507_3dd4b7cc-e3a1-4847-8c7c-340330c2d74c became leader
	I0420 01:31:12.082437       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-269507_3dd4b7cc-e3a1-4847-8c7c-340330c2d74c!
	

                                                
                                                
-- /stdout --
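The repeated "Could not set up iptables canary" entries in the kubelet log above come from ip6tables failing to initialize the nat table inside the guest. A hypothetical spot-check (assuming the embed-certs-269507 profile is still running) would be to look for the ip6tables nat module directly over the profile's SSH session:

	minikube ssh -p embed-certs-269507 "lsmod | grep ip6table"

If the module is absent, the canary errors are expected and separate from the metrics-server ImagePullBackOff, which the kubelet log attributes to pulling fake.domain/registry.k8s.io/echoserver:1.4.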
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-269507 -n embed-certs-269507
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-269507 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jwbst
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-269507 describe pod metrics-server-569cc877fc-jwbst
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-269507 describe pod metrics-server-569cc877fc-jwbst: exit status 1 (79.112917ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jwbst" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-269507 describe pod metrics-server-569cc877fc-jwbst: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (367.89s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (202.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-338118 -n no-preload-338118
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-20 01:45:12.361411005 +0000 UTC m=+6485.876278382
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-338118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-338118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.647µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-338118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
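The check at start_stop_delete_test.go:297 verifies that the dashboard addon's scraper deployment carries the image the addon was enabled with. A hypothetical manual version of that check (assuming the kubernetes-dashboard namespace and the dashboard-metrics-scraper deployment named above) is a jsonpath query against the same context:

	kubectl --context no-preload-338118 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

The test expects the printed image to contain registry.k8s.io/echoserver:1.4, matching the --images=MetricsScraper override used when the addon was enabled; here the query never ran because the earlier wait had already exhausted its context deadline.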
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-338118 -n no-preload-338118
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-338118 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-338118 logs -n 25: (1.468938733s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-172352 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | disable-driver-mounts-172352                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:17 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-338118             | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC | 20 Apr 24 01:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-907988  | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-269507            | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-564860        | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-338118                  | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-907988       | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:30 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-269507                 | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-564860             | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:44 UTC | 20 Apr 24 01:44 UTC |
	| start   | -p newest-cni-776287 --memory=2200 --alsologtostderr   | newest-cni-776287            | jenkins | v1.33.0 | 20 Apr 24 01:44 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:44:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:44:32.124565  148480 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:44:32.124694  148480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:44:32.124702  148480 out.go:304] Setting ErrFile to fd 2...
	I0420 01:44:32.124707  148480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:44:32.125002  148480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:44:32.125643  148480 out.go:298] Setting JSON to false
	I0420 01:44:32.126778  148480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16019,"bootTime":1713561453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:44:32.126842  148480 start.go:139] virtualization: kvm guest
	I0420 01:44:32.129369  148480 out.go:177] * [newest-cni-776287] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:44:32.130739  148480 notify.go:220] Checking for updates...
	I0420 01:44:32.130748  148480 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:44:32.132274  148480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:44:32.133689  148480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:44:32.135028  148480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:44:32.136276  148480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:44:32.137546  148480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:44:32.139178  148480 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:44:32.139293  148480 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:44:32.139395  148480 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:44:32.139519  148480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:44:32.174431  148480 out.go:177] * Using the kvm2 driver based on user configuration
	I0420 01:44:32.175713  148480 start.go:297] selected driver: kvm2
	I0420 01:44:32.175726  148480 start.go:901] validating driver "kvm2" against <nil>
	I0420 01:44:32.175736  148480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:44:32.176465  148480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:44:32.176529  148480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:44:32.191764  148480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:44:32.191806  148480 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0420 01:44:32.191834  148480 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0420 01:44:32.192041  148480 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0420 01:44:32.192119  148480 cni.go:84] Creating CNI manager for ""
	I0420 01:44:32.192136  148480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:44:32.192149  148480 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0420 01:44:32.192223  148480 start.go:340] cluster config:
	{Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:44:32.192334  148480 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:44:32.194517  148480 out.go:177] * Starting "newest-cni-776287" primary control-plane node in "newest-cni-776287" cluster
	I0420 01:44:32.195889  148480 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:44:32.195929  148480 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:44:32.195937  148480 cache.go:56] Caching tarball of preloaded images
	I0420 01:44:32.196023  148480 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:44:32.196035  148480 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:44:32.196134  148480 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/config.json ...
	I0420 01:44:32.196155  148480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/config.json: {Name:mkaabae2655ca7b9c6777c6d0f19a25da3705ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:44:32.196313  148480 start.go:360] acquireMachinesLock for newest-cni-776287: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:44:32.196349  148480 start.go:364] duration metric: took 18.412µs to acquireMachinesLock for "newest-cni-776287"
	I0420 01:44:32.196378  148480 start.go:93] Provisioning new machine with config: &{Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:44:32.196446  148480 start.go:125] createHost starting for "" (driver="kvm2")
	I0420 01:44:32.198969  148480 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0420 01:44:32.199134  148480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:44:32.199178  148480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:44:32.215146  148480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32953
	I0420 01:44:32.215584  148480 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:44:32.216113  148480 main.go:141] libmachine: Using API Version  1
	I0420 01:44:32.216147  148480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:44:32.216501  148480 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:44:32.216768  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:44:32.216941  148480 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:44:32.217089  148480 start.go:159] libmachine.API.Create for "newest-cni-776287" (driver="kvm2")
	I0420 01:44:32.217111  148480 client.go:168] LocalClient.Create starting
	I0420 01:44:32.217178  148480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem
	I0420 01:44:32.217216  148480 main.go:141] libmachine: Decoding PEM data...
	I0420 01:44:32.217236  148480 main.go:141] libmachine: Parsing certificate...
	I0420 01:44:32.217304  148480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem
	I0420 01:44:32.217348  148480 main.go:141] libmachine: Decoding PEM data...
	I0420 01:44:32.217364  148480 main.go:141] libmachine: Parsing certificate...
	I0420 01:44:32.217416  148480 main.go:141] libmachine: Running pre-create checks...
	I0420 01:44:32.217429  148480 main.go:141] libmachine: (newest-cni-776287) Calling .PreCreateCheck
	I0420 01:44:32.217783  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetConfigRaw
	I0420 01:44:32.218192  148480 main.go:141] libmachine: Creating machine...
	I0420 01:44:32.218212  148480 main.go:141] libmachine: (newest-cni-776287) Calling .Create
	I0420 01:44:32.218381  148480 main.go:141] libmachine: (newest-cni-776287) Creating KVM machine...
	I0420 01:44:32.219683  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found existing default KVM network
	I0420 01:44:32.220926  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:32.220763  148503 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:84:54:a7} reservation:<nil>}
	I0420 01:44:32.221759  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:32.221646  148503 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:da:97:5c} reservation:<nil>}
	I0420 01:44:32.222812  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:32.222745  148503 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5000}
	I0420 01:44:32.222877  148480 main.go:141] libmachine: (newest-cni-776287) DBG | created network xml: 
	I0420 01:44:32.222896  148480 main.go:141] libmachine: (newest-cni-776287) DBG | <network>
	I0420 01:44:32.222908  148480 main.go:141] libmachine: (newest-cni-776287) DBG |   <name>mk-newest-cni-776287</name>
	I0420 01:44:32.222941  148480 main.go:141] libmachine: (newest-cni-776287) DBG |   <dns enable='no'/>
	I0420 01:44:32.222954  148480 main.go:141] libmachine: (newest-cni-776287) DBG |   
	I0420 01:44:32.222964  148480 main.go:141] libmachine: (newest-cni-776287) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0420 01:44:32.222978  148480 main.go:141] libmachine: (newest-cni-776287) DBG |     <dhcp>
	I0420 01:44:32.222997  148480 main.go:141] libmachine: (newest-cni-776287) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0420 01:44:32.223011  148480 main.go:141] libmachine: (newest-cni-776287) DBG |     </dhcp>
	I0420 01:44:32.223026  148480 main.go:141] libmachine: (newest-cni-776287) DBG |   </ip>
	I0420 01:44:32.223039  148480 main.go:141] libmachine: (newest-cni-776287) DBG |   
	I0420 01:44:32.223050  148480 main.go:141] libmachine: (newest-cni-776287) DBG | </network>
	I0420 01:44:32.223061  148480 main.go:141] libmachine: (newest-cni-776287) DBG | 
	I0420 01:44:32.228157  148480 main.go:141] libmachine: (newest-cni-776287) DBG | trying to create private KVM network mk-newest-cni-776287 192.168.61.0/24...
	I0420 01:44:32.301468  148480 main.go:141] libmachine: (newest-cni-776287) DBG | private KVM network mk-newest-cni-776287 192.168.61.0/24 created
	I0420 01:44:32.301581  148480 main.go:141] libmachine: (newest-cni-776287) Setting up store path in /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287 ...
	I0420 01:44:32.301645  148480 main.go:141] libmachine: (newest-cni-776287) Building disk image from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0420 01:44:32.301681  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:32.301601  148503 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:44:32.301744  148480 main.go:141] libmachine: (newest-cni-776287) Downloading /home/jenkins/minikube-integration/18703-76456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0420 01:44:32.547840  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:32.547729  148503 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa...
	I0420 01:44:32.608411  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:32.608281  148503 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/newest-cni-776287.rawdisk...
	I0420 01:44:32.608447  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Writing magic tar header
	I0420 01:44:32.608467  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Writing SSH key tar header
	I0420 01:44:32.608480  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:32.608412  148503 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287 ...
	I0420 01:44:32.608526  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287
	I0420 01:44:32.608556  148480 main.go:141] libmachine: (newest-cni-776287) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287 (perms=drwx------)
	I0420 01:44:32.608578  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube/machines
	I0420 01:44:32.608613  148480 main.go:141] libmachine: (newest-cni-776287) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube/machines (perms=drwxr-xr-x)
	I0420 01:44:32.608637  148480 main.go:141] libmachine: (newest-cni-776287) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456/.minikube (perms=drwxr-xr-x)
	I0420 01:44:32.608656  148480 main.go:141] libmachine: (newest-cni-776287) Setting executable bit set on /home/jenkins/minikube-integration/18703-76456 (perms=drwxrwxr-x)
	I0420 01:44:32.608695  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:44:32.608725  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18703-76456
	I0420 01:44:32.608740  148480 main.go:141] libmachine: (newest-cni-776287) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0420 01:44:32.608758  148480 main.go:141] libmachine: (newest-cni-776287) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0420 01:44:32.608769  148480 main.go:141] libmachine: (newest-cni-776287) Creating domain...
	I0420 01:44:32.608784  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0420 01:44:32.608797  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Checking permissions on dir: /home/jenkins
	I0420 01:44:32.608810  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Checking permissions on dir: /home
	I0420 01:44:32.608827  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Skipping /home - not owner
	I0420 01:44:32.610141  148480 main.go:141] libmachine: (newest-cni-776287) define libvirt domain using xml: 
	I0420 01:44:32.610176  148480 main.go:141] libmachine: (newest-cni-776287) <domain type='kvm'>
	I0420 01:44:32.610187  148480 main.go:141] libmachine: (newest-cni-776287)   <name>newest-cni-776287</name>
	I0420 01:44:32.610196  148480 main.go:141] libmachine: (newest-cni-776287)   <memory unit='MiB'>2200</memory>
	I0420 01:44:32.610204  148480 main.go:141] libmachine: (newest-cni-776287)   <vcpu>2</vcpu>
	I0420 01:44:32.610214  148480 main.go:141] libmachine: (newest-cni-776287)   <features>
	I0420 01:44:32.610222  148480 main.go:141] libmachine: (newest-cni-776287)     <acpi/>
	I0420 01:44:32.610237  148480 main.go:141] libmachine: (newest-cni-776287)     <apic/>
	I0420 01:44:32.610249  148480 main.go:141] libmachine: (newest-cni-776287)     <pae/>
	I0420 01:44:32.610263  148480 main.go:141] libmachine: (newest-cni-776287)     
	I0420 01:44:32.610272  148480 main.go:141] libmachine: (newest-cni-776287)   </features>
	I0420 01:44:32.610287  148480 main.go:141] libmachine: (newest-cni-776287)   <cpu mode='host-passthrough'>
	I0420 01:44:32.610296  148480 main.go:141] libmachine: (newest-cni-776287)   
	I0420 01:44:32.610303  148480 main.go:141] libmachine: (newest-cni-776287)   </cpu>
	I0420 01:44:32.610311  148480 main.go:141] libmachine: (newest-cni-776287)   <os>
	I0420 01:44:32.610319  148480 main.go:141] libmachine: (newest-cni-776287)     <type>hvm</type>
	I0420 01:44:32.610331  148480 main.go:141] libmachine: (newest-cni-776287)     <boot dev='cdrom'/>
	I0420 01:44:32.610342  148480 main.go:141] libmachine: (newest-cni-776287)     <boot dev='hd'/>
	I0420 01:44:32.610354  148480 main.go:141] libmachine: (newest-cni-776287)     <bootmenu enable='no'/>
	I0420 01:44:32.610361  148480 main.go:141] libmachine: (newest-cni-776287)   </os>
	I0420 01:44:32.610370  148480 main.go:141] libmachine: (newest-cni-776287)   <devices>
	I0420 01:44:32.610378  148480 main.go:141] libmachine: (newest-cni-776287)     <disk type='file' device='cdrom'>
	I0420 01:44:32.610393  148480 main.go:141] libmachine: (newest-cni-776287)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/boot2docker.iso'/>
	I0420 01:44:32.610407  148480 main.go:141] libmachine: (newest-cni-776287)       <target dev='hdc' bus='scsi'/>
	I0420 01:44:32.610416  148480 main.go:141] libmachine: (newest-cni-776287)       <readonly/>
	I0420 01:44:32.610431  148480 main.go:141] libmachine: (newest-cni-776287)     </disk>
	I0420 01:44:32.610466  148480 main.go:141] libmachine: (newest-cni-776287)     <disk type='file' device='disk'>
	I0420 01:44:32.610490  148480 main.go:141] libmachine: (newest-cni-776287)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0420 01:44:32.610565  148480 main.go:141] libmachine: (newest-cni-776287)       <source file='/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/newest-cni-776287.rawdisk'/>
	I0420 01:44:32.610602  148480 main.go:141] libmachine: (newest-cni-776287)       <target dev='hda' bus='virtio'/>
	I0420 01:44:32.610635  148480 main.go:141] libmachine: (newest-cni-776287)     </disk>
	I0420 01:44:32.610654  148480 main.go:141] libmachine: (newest-cni-776287)     <interface type='network'>
	I0420 01:44:32.610670  148480 main.go:141] libmachine: (newest-cni-776287)       <source network='mk-newest-cni-776287'/>
	I0420 01:44:32.610681  148480 main.go:141] libmachine: (newest-cni-776287)       <model type='virtio'/>
	I0420 01:44:32.610694  148480 main.go:141] libmachine: (newest-cni-776287)     </interface>
	I0420 01:44:32.610702  148480 main.go:141] libmachine: (newest-cni-776287)     <interface type='network'>
	I0420 01:44:32.610712  148480 main.go:141] libmachine: (newest-cni-776287)       <source network='default'/>
	I0420 01:44:32.610722  148480 main.go:141] libmachine: (newest-cni-776287)       <model type='virtio'/>
	I0420 01:44:32.610730  148480 main.go:141] libmachine: (newest-cni-776287)     </interface>
	I0420 01:44:32.610740  148480 main.go:141] libmachine: (newest-cni-776287)     <serial type='pty'>
	I0420 01:44:32.610769  148480 main.go:141] libmachine: (newest-cni-776287)       <target port='0'/>
	I0420 01:44:32.610798  148480 main.go:141] libmachine: (newest-cni-776287)     </serial>
	I0420 01:44:32.610812  148480 main.go:141] libmachine: (newest-cni-776287)     <console type='pty'>
	I0420 01:44:32.610820  148480 main.go:141] libmachine: (newest-cni-776287)       <target type='serial' port='0'/>
	I0420 01:44:32.610832  148480 main.go:141] libmachine: (newest-cni-776287)     </console>
	I0420 01:44:32.610842  148480 main.go:141] libmachine: (newest-cni-776287)     <rng model='virtio'>
	I0420 01:44:32.610855  148480 main.go:141] libmachine: (newest-cni-776287)       <backend model='random'>/dev/random</backend>
	I0420 01:44:32.610864  148480 main.go:141] libmachine: (newest-cni-776287)     </rng>
	I0420 01:44:32.610877  148480 main.go:141] libmachine: (newest-cni-776287)     
	I0420 01:44:32.610888  148480 main.go:141] libmachine: (newest-cni-776287)     
	I0420 01:44:32.610899  148480 main.go:141] libmachine: (newest-cni-776287)   </devices>
	I0420 01:44:32.610914  148480 main.go:141] libmachine: (newest-cni-776287) </domain>
	I0420 01:44:32.610924  148480 main.go:141] libmachine: (newest-cni-776287) 
	I0420 01:44:32.615048  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:78:e4:ed in network default
	I0420 01:44:32.615698  148480 main.go:141] libmachine: (newest-cni-776287) Ensuring networks are active...
	I0420 01:44:32.615723  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:32.616679  148480 main.go:141] libmachine: (newest-cni-776287) Ensuring network default is active
	I0420 01:44:32.617039  148480 main.go:141] libmachine: (newest-cni-776287) Ensuring network mk-newest-cni-776287 is active
	I0420 01:44:32.617827  148480 main.go:141] libmachine: (newest-cni-776287) Getting domain xml...
	I0420 01:44:32.618727  148480 main.go:141] libmachine: (newest-cni-776287) Creating domain...
	I0420 01:44:33.872731  148480 main.go:141] libmachine: (newest-cni-776287) Waiting to get IP...
	I0420 01:44:33.873546  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:33.873960  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:33.873977  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:33.873939  148503 retry.go:31] will retry after 196.036763ms: waiting for machine to come up
	I0420 01:44:34.071426  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:34.071955  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:34.071990  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:34.071906  148503 retry.go:31] will retry after 382.04135ms: waiting for machine to come up
	I0420 01:44:34.455404  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:34.455841  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:34.455872  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:34.455780  148503 retry.go:31] will retry after 412.166365ms: waiting for machine to come up
	I0420 01:44:34.869031  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:34.869613  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:34.869645  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:34.869534  148503 retry.go:31] will retry after 574.326662ms: waiting for machine to come up
	I0420 01:44:35.445051  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:35.445565  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:35.445595  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:35.445524  148503 retry.go:31] will retry after 664.363468ms: waiting for machine to come up
	I0420 01:44:36.111093  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:36.111704  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:36.111736  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:36.111616  148503 retry.go:31] will retry after 650.162403ms: waiting for machine to come up
	I0420 01:44:36.763447  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:36.763922  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:36.763972  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:36.763859  148503 retry.go:31] will retry after 750.659633ms: waiting for machine to come up
	I0420 01:44:37.518023  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:37.518494  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:37.518525  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:37.518442  148503 retry.go:31] will retry after 967.448288ms: waiting for machine to come up
	I0420 01:44:38.487589  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:38.488237  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:38.488267  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:38.488175  148503 retry.go:31] will retry after 1.859811403s: waiting for machine to come up
	I0420 01:44:40.349196  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:40.349737  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:40.349789  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:40.349685  148503 retry.go:31] will retry after 2.20354196s: waiting for machine to come up
	I0420 01:44:42.554802  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:42.555308  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:42.555337  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:42.555248  148503 retry.go:31] will retry after 2.350650038s: waiting for machine to come up
	I0420 01:44:44.907245  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:44.907719  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:44.907750  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:44.907677  148503 retry.go:31] will retry after 3.520642662s: waiting for machine to come up
	I0420 01:44:48.430390  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:48.430812  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:48.430843  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:48.430753  148503 retry.go:31] will retry after 2.747729177s: waiting for machine to come up
	I0420 01:44:51.181492  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:51.181912  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find current IP address of domain newest-cni-776287 in network mk-newest-cni-776287
	I0420 01:44:51.181935  148480 main.go:141] libmachine: (newest-cni-776287) DBG | I0420 01:44:51.181894  148503 retry.go:31] will retry after 5.13174896s: waiting for machine to come up
	I0420 01:44:56.318627  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.319174  148480 main.go:141] libmachine: (newest-cni-776287) Found IP for machine: 192.168.61.191
	I0420 01:44:56.319196  148480 main.go:141] libmachine: (newest-cni-776287) Reserving static IP address...
	I0420 01:44:56.319210  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has current primary IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.319535  148480 main.go:141] libmachine: (newest-cni-776287) DBG | unable to find host DHCP lease matching {name: "newest-cni-776287", mac: "52:54:00:e3:cd:b1", ip: "192.168.61.191"} in network mk-newest-cni-776287
	I0420 01:44:56.396298  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Getting to WaitForSSH function...
	I0420 01:44:56.396325  148480 main.go:141] libmachine: (newest-cni-776287) Reserved static IP address: 192.168.61.191
	I0420 01:44:56.396343  148480 main.go:141] libmachine: (newest-cni-776287) Waiting for SSH to be available...
	I0420 01:44:56.399131  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.399582  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:56.399612  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.399773  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Using SSH client type: external
	I0420 01:44:56.399800  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa (-rw-------)
	I0420 01:44:56.399835  148480 main.go:141] libmachine: (newest-cni-776287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:44:56.399846  148480 main.go:141] libmachine: (newest-cni-776287) DBG | About to run SSH command:
	I0420 01:44:56.399887  148480 main.go:141] libmachine: (newest-cni-776287) DBG | exit 0
	I0420 01:44:56.530552  148480 main.go:141] libmachine: (newest-cni-776287) DBG | SSH cmd err, output: <nil>: 
	I0420 01:44:56.530846  148480 main.go:141] libmachine: (newest-cni-776287) KVM machine creation complete!
	I0420 01:44:56.531253  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetConfigRaw
	I0420 01:44:56.531941  148480 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:44:56.532177  148480 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:44:56.532412  148480 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0420 01:44:56.532432  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetState
	I0420 01:44:56.533981  148480 main.go:141] libmachine: Detecting operating system of created instance...
	I0420 01:44:56.533996  148480 main.go:141] libmachine: Waiting for SSH to be available...
	I0420 01:44:56.534002  148480 main.go:141] libmachine: Getting to WaitForSSH function...
	I0420 01:44:56.534009  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:56.536652  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.537116  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:56.537156  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.537263  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:56.537488  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:56.537649  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:56.537871  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:56.538082  148480 main.go:141] libmachine: Using SSH client type: native
	I0420 01:44:56.538268  148480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:44:56.538279  148480 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0420 01:44:56.653146  148480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:44:56.653172  148480 main.go:141] libmachine: Detecting the provisioner...
	I0420 01:44:56.653184  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:56.656057  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.656486  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:56.656541  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.656767  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:56.656945  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:56.657140  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:56.657260  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:56.657473  148480 main.go:141] libmachine: Using SSH client type: native
	I0420 01:44:56.657647  148480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:44:56.657663  148480 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0420 01:44:56.774797  148480 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0420 01:44:56.774872  148480 main.go:141] libmachine: found compatible host: buildroot
	I0420 01:44:56.774888  148480 main.go:141] libmachine: Provisioning with buildroot...
	I0420 01:44:56.774901  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:44:56.775190  148480 buildroot.go:166] provisioning hostname "newest-cni-776287"
	I0420 01:44:56.775211  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:44:56.775425  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:56.778375  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.778768  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:56.778809  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.778933  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:56.779156  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:56.779386  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:56.779525  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:56.779676  148480 main.go:141] libmachine: Using SSH client type: native
	I0420 01:44:56.779868  148480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:44:56.779882  148480 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-776287 && echo "newest-cni-776287" | sudo tee /etc/hostname
	I0420 01:44:56.911212  148480 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-776287
	
	I0420 01:44:56.911240  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:56.913833  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.914192  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:56.914215  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:56.914410  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:56.914623  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:56.914814  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:56.915000  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:56.915189  148480 main.go:141] libmachine: Using SSH client type: native
	I0420 01:44:56.915386  148480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:44:56.915411  148480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-776287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-776287/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-776287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:44:57.043326  148480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:44:57.043364  148480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:44:57.043413  148480 buildroot.go:174] setting up certificates
	I0420 01:44:57.043426  148480 provision.go:84] configureAuth start
	I0420 01:44:57.043446  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetMachineName
	I0420 01:44:57.043725  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:44:57.046360  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.046769  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:57.046801  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.047000  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:57.049593  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.050020  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:57.050046  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.050222  148480 provision.go:143] copyHostCerts
	I0420 01:44:57.050287  148480 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:44:57.050309  148480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:44:57.050424  148480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:44:57.050517  148480 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:44:57.050526  148480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:44:57.050553  148480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:44:57.050617  148480 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:44:57.050624  148480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:44:57.050645  148480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:44:57.050696  148480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.newest-cni-776287 san=[127.0.0.1 192.168.61.191 localhost minikube newest-cni-776287]
	I0420 01:44:57.352939  148480 provision.go:177] copyRemoteCerts
	I0420 01:44:57.353016  148480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:44:57.353051  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:57.356177  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.356560  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:57.356593  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.356811  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:57.357017  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:57.357216  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:57.357385  148480 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:44:57.445978  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:44:57.473194  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0420 01:44:57.501031  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:44:57.528575  148480 provision.go:87] duration metric: took 485.132766ms to configureAuth
	I0420 01:44:57.528602  148480 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:44:57.528788  148480 config.go:182] Loaded profile config "newest-cni-776287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:44:57.528896  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:57.531674  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.532097  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:57.532130  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.532291  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:57.532505  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:57.532668  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:57.532814  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:57.532999  148480 main.go:141] libmachine: Using SSH client type: native
	I0420 01:44:57.533157  148480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:44:57.533171  148480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:44:57.848925  148480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:44:57.848956  148480 main.go:141] libmachine: Checking connection to Docker...
	I0420 01:44:57.848965  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetURL
	I0420 01:44:57.850357  148480 main.go:141] libmachine: (newest-cni-776287) DBG | Using libvirt version 6000000
	I0420 01:44:57.852342  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.852716  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:57.852754  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.852952  148480 main.go:141] libmachine: Docker is up and running!
	I0420 01:44:57.852969  148480 main.go:141] libmachine: Reticulating splines...
	I0420 01:44:57.852978  148480 client.go:171] duration metric: took 25.635840744s to LocalClient.Create
	I0420 01:44:57.853009  148480 start.go:167] duration metric: took 25.635920397s to libmachine.API.Create "newest-cni-776287"
	I0420 01:44:57.853021  148480 start.go:293] postStartSetup for "newest-cni-776287" (driver="kvm2")
	I0420 01:44:57.853035  148480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:44:57.853059  148480 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:44:57.853401  148480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:44:57.853429  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:57.855501  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.855813  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:57.855842  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.855933  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:57.856127  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:57.856292  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:57.856463  148480 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:44:57.944200  148480 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:44:57.949653  148480 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:44:57.949679  148480 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:44:57.949757  148480 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:44:57.949843  148480 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:44:57.949936  148480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:44:57.959821  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:44:57.988478  148480 start.go:296] duration metric: took 135.443738ms for postStartSetup
	I0420 01:44:57.988520  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetConfigRaw
	I0420 01:44:57.989103  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:44:57.991814  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.992218  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:57.992266  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.992494  148480 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/config.json ...
	I0420 01:44:57.992717  148480 start.go:128] duration metric: took 25.796260622s to createHost
	I0420 01:44:57.992746  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:57.995118  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.995535  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:57.995560  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:57.995660  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:57.995836  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:57.995996  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:57.996148  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:57.996324  148480 main.go:141] libmachine: Using SSH client type: native
	I0420 01:44:57.996520  148480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I0420 01:44:57.996535  148480 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:44:58.110222  148480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713577498.092267961
	
	I0420 01:44:58.110242  148480 fix.go:216] guest clock: 1713577498.092267961
	I0420 01:44:58.110249  148480 fix.go:229] Guest: 2024-04-20 01:44:58.092267961 +0000 UTC Remote: 2024-04-20 01:44:57.992731217 +0000 UTC m=+25.917426730 (delta=99.536744ms)
	I0420 01:44:58.110267  148480 fix.go:200] guest clock delta is within tolerance: 99.536744ms
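	(The delta reported above is just the guest clock minus the "Remote" timestamp on the previous line: 1713577498.092267961 − 1713577497.992731217 ≈ 0.0995 s, i.e. the 99.536744ms shown, comfortably within the drift tolerance.)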
	I0420 01:44:58.110272  148480 start.go:83] releasing machines lock for "newest-cni-776287", held for 25.913916504s
	I0420 01:44:58.110289  148480 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:44:58.110565  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:44:58.113286  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:58.113645  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:58.113685  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:58.113862  148480 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:44:58.114472  148480 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:44:58.114659  148480 main.go:141] libmachine: (newest-cni-776287) Calling .DriverName
	I0420 01:44:58.114739  148480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:44:58.114780  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:58.115064  148480 ssh_runner.go:195] Run: cat /version.json
	I0420 01:44:58.115114  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHHostname
	I0420 01:44:58.117807  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:58.118006  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:58.118206  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:58.118241  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:58.118453  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:58.118478  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:58.118496  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:58.118754  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:58.118777  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHPort
	I0420 01:44:58.118977  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHKeyPath
	I0420 01:44:58.118982  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:58.119155  148480 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:44:58.119224  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetSSHUsername
	I0420 01:44:58.119345  148480 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/newest-cni-776287/id_rsa Username:docker}
	I0420 01:44:58.224359  148480 ssh_runner.go:195] Run: systemctl --version
	I0420 01:44:58.231447  148480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:44:58.392935  148480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:44:58.400728  148480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:44:58.400804  148480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:44:58.419148  148480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:44:58.419172  148480 start.go:494] detecting cgroup driver to use...
	I0420 01:44:58.419222  148480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:44:58.439674  148480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:44:58.455467  148480 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:44:58.455522  148480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:44:58.470646  148480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:44:58.485954  148480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:44:58.608745  148480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:44:58.782381  148480 docker.go:233] disabling docker service ...
	I0420 01:44:58.782457  148480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:44:58.800603  148480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:44:58.815933  148480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:44:58.953263  148480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:44:59.093910  148480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:44:59.110279  148480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:44:59.132967  148480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:44:59.133034  148480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:44:59.146698  148480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:44:59.146760  148480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:44:59.159638  148480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:44:59.172494  148480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:44:59.186201  148480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:44:59.200895  148480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:44:59.214732  148480 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:44:59.235549  148480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
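	(Net effect of the sed edits above — a sketch of the resulting keys in /etc/crio/crio.conf.d/02-crio.conf, inferred from the commands; section headers and the rest of the file are omitted since the file itself is not captured in this log:
		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	)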
	I0420 01:44:59.248122  148480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:44:59.259560  148480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:44:59.259607  148480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:44:59.274936  148480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:44:59.286488  148480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:44:59.418694  148480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:44:59.574155  148480 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:44:59.574262  148480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:44:59.579677  148480 start.go:562] Will wait 60s for crictl version
	I0420 01:44:59.579734  148480 ssh_runner.go:195] Run: which crictl
	I0420 01:44:59.584633  148480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:44:59.626705  148480 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:44:59.626774  148480 ssh_runner.go:195] Run: crio --version
	I0420 01:44:59.662936  148480 ssh_runner.go:195] Run: crio --version
	I0420 01:44:59.704982  148480 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:44:59.706511  148480 main.go:141] libmachine: (newest-cni-776287) Calling .GetIP
	I0420 01:44:59.709287  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:59.709682  148480 main.go:141] libmachine: (newest-cni-776287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:cd:b1", ip: ""} in network mk-newest-cni-776287: {Iface:virbr2 ExpiryTime:2024-04-20 02:44:48 +0000 UTC Type:0 Mac:52:54:00:e3:cd:b1 Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:newest-cni-776287 Clientid:01:52:54:00:e3:cd:b1}
	I0420 01:44:59.709711  148480 main.go:141] libmachine: (newest-cni-776287) DBG | domain newest-cni-776287 has defined IP address 192.168.61.191 and MAC address 52:54:00:e3:cd:b1 in network mk-newest-cni-776287
	I0420 01:44:59.709982  148480 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:44:59.714980  148480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:44:59.733275  148480 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0420 01:44:59.734735  148480 kubeadm.go:877] updating cluster {Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:44:59.734854  148480 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:44:59.734909  148480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:44:59.779395  148480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:44:59.779484  148480 ssh_runner.go:195] Run: which lz4
	I0420 01:44:59.784703  148480 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:44:59.790422  148480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:44:59.790457  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:45:01.533866  148480 crio.go:462] duration metric: took 1.749196489s to copy over tarball
	I0420 01:45:01.533974  148480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:45:04.072896  148480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.538870644s)
	I0420 01:45:04.072934  148480 crio.go:469] duration metric: took 2.539033452s to extract the tarball
	I0420 01:45:04.072945  148480 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:45:04.113719  148480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:45:04.169618  148480 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:45:04.169651  148480 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:45:04.169662  148480 kubeadm.go:928] updating node { 192.168.61.191 8443 v1.30.0 crio true true} ...
	I0420 01:45:04.169806  148480 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-776287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:45:04.169890  148480 ssh_runner.go:195] Run: crio config
	I0420 01:45:04.223401  148480 cni.go:84] Creating CNI manager for ""
	I0420 01:45:04.223433  148480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:45:04.223451  148480 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0420 01:45:04.223484  148480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.191 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-776287 NodeName:newest-cni-776287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:45:04.223651  148480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-776287"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:45:04.223716  148480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:45:04.235790  148480 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:45:04.235862  148480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:45:04.246785  148480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0420 01:45:04.268137  148480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:45:04.286564  148480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0420 01:45:04.306268  148480 ssh_runner.go:195] Run: grep 192.168.61.191	control-plane.minikube.internal$ /etc/hosts
	I0420 01:45:04.310620  148480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:45:04.327301  148480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:45:04.473142  148480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:45:04.496005  148480 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287 for IP: 192.168.61.191
	I0420 01:45:04.496031  148480 certs.go:194] generating shared ca certs ...
	I0420 01:45:04.496055  148480 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:45:04.496226  148480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:45:04.496292  148480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:45:04.496307  148480 certs.go:256] generating profile certs ...
	I0420 01:45:04.496404  148480 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/client.key
	I0420 01:45:04.496421  148480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/client.crt with IP's: []
	I0420 01:45:04.864644  148480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/client.crt ...
	I0420 01:45:04.864688  148480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/client.crt: {Name:mk993c8319564283e0e1439835c7c26ade892dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:45:04.864867  148480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/client.key ...
	I0420 01:45:04.864888  148480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/client.key: {Name:mke4f9376e4f17606f2712f4b6079e0f38ca6eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:45:04.864968  148480 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key.e52dbc46
	I0420 01:45:04.864983  148480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.crt.e52dbc46 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.191]
	I0420 01:45:04.949946  148480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.crt.e52dbc46 ...
	I0420 01:45:04.949977  148480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.crt.e52dbc46: {Name:mk8c3c3ce34cc3fc161ea73d983b2ae4c709cbd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:45:04.950119  148480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key.e52dbc46 ...
	I0420 01:45:04.950132  148480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key.e52dbc46: {Name:mk037c69e699ec1382bab229d30b6f3897bff273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:45:04.950198  148480 certs.go:381] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.crt.e52dbc46 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.crt
	I0420 01:45:04.950265  148480 certs.go:385] copying /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key.e52dbc46 -> /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key
	I0420 01:45:04.950353  148480 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.key
	I0420 01:45:04.950379  148480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.crt with IP's: []
	I0420 01:45:05.400267  148480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.crt ...
	I0420 01:45:05.400314  148480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.crt: {Name:mk2fa19c7232ba65081c2803f2716ef8b4f4e039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:45:05.400521  148480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.key ...
	I0420 01:45:05.400548  148480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.key: {Name:mkcb05d01bf3330ba1d431d29381abfdcd35e91b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:45:05.400788  148480 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:45:05.400831  148480 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:45:05.400841  148480 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:45:05.400869  148480 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:45:05.400901  148480 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:45:05.400932  148480 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:45:05.400986  148480 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:45:05.401674  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:45:05.459735  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:45:05.490040  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:45:05.520348  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:45:05.546862  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 01:45:05.573915  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 01:45:05.602849  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:45:05.630832  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/newest-cni-776287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:45:05.659868  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:45:05.690236  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:45:05.718562  148480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:45:05.744616  148480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:45:05.766258  148480 ssh_runner.go:195] Run: openssl version
	I0420 01:45:05.773148  148480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:45:05.786950  148480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:45:05.792421  148480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:45:05.792505  148480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:45:05.799276  148480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:45:05.814143  148480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:45:05.828015  148480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:45:05.833426  148480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:45:05.833492  148480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:45:05.840134  148480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:45:05.853709  148480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:45:05.867677  148480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:45:05.872822  148480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:45:05.872865  148480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:45:05.879436  148480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
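	The ls / "openssl x509 -hash" / "ln -fs" sequence above is how the guest's OpenSSL trust store is populated: each CA certificate copied under /usr/share/ca-certificates is hashed by subject name and linked into /etc/ssl/certs as <hash>.0 so TLS clients can find it by hash lookup. A minimal Go sketch of that pattern (a hypothetical installCACert helper for illustration, not minikube's actual certs.go code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert hashes a CA certificate with openssl and links it into the
	// hashed trust directory as <subject-hash>.0, mirroring the log lines above.
	func installCACert(pemPath, certsDir string) error {
		// "openssl x509 -hash -noout -in <cert>" prints the OpenSSL subject-name
		// hash used for lookups in a c_rehash-style certificate directory.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))

		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // emulate `ln -fs`: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
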
	I0420 01:45:05.893950  148480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:45:05.898905  148480 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 01:45:05.898999  148480 kubeadm.go:391] StartCluster: {Name:newest-cni-776287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-776287 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:45:05.899074  148480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:45:05.899117  148480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:45:05.944553  148480 cri.go:89] found id: ""
	I0420 01:45:05.944623  148480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 01:45:05.957501  148480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:45:05.970041  148480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:45:05.981577  148480 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:45:05.981595  148480 kubeadm.go:156] found existing configuration files:
	
	I0420 01:45:05.981686  148480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:45:05.992986  148480 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:45:05.993076  148480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:45:06.004486  148480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:45:06.016603  148480 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:45:06.016664  148480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:45:06.028816  148480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:45:06.040273  148480 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:45:06.040340  148480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:45:06.052483  148480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:45:06.068212  148480 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:45:06.068284  148480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
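	The grep / "rm -f" pairs above implement the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise removed before kubeadm init runs. A rough Go sketch of the same idea (a hypothetical cleanStaleKubeconfigs helper, not the real kubeadm.go logic):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes any existing kubeconfig under /etc/kubernetes
	// that does not reference the expected control-plane endpoint, so kubeadm
	// init starts from a clean slate.
	func cleanStaleKubeconfigs(endpoint string) {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // file missing (as in the grep failures above): nothing to clean
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Printf("removing stale %s\n", f)
				_ = os.Remove(f)
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}
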
	I0420 01:45:06.080479  148480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:45:06.218505  148480 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:45:06.218678  148480 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:45:06.367493  148480 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:45:06.367613  148480 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:45:06.367804  148480 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:45:06.602573  148480 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:45:06.660249  148480 out.go:204]   - Generating certificates and keys ...
	I0420 01:45:06.660383  148480 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:45:06.660483  148480 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:45:06.752237  148480 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 01:45:06.840009  148480 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 01:45:06.994697  148480 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 01:45:07.231492  148480 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 01:45:07.311303  148480 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 01:45:07.311469  148480 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-776287] and IPs [192.168.61.191 127.0.0.1 ::1]
	I0420 01:45:07.541945  148480 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 01:45:07.542145  148480 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-776287] and IPs [192.168.61.191 127.0.0.1 ::1]
	I0420 01:45:07.701167  148480 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 01:45:08.028712  148480 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 01:45:08.202209  148480 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 01:45:08.202294  148480 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:45:08.265594  148480 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:45:08.673989  148480 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:45:09.030411  148480 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:45:09.206625  148480 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:45:09.384091  148480 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:45:09.384777  148480 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:45:09.389203  148480 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:45:09.391091  148480 out.go:204]   - Booting up control plane ...
	I0420 01:45:09.391228  148480 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:45:09.391364  148480 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:45:09.391473  148480 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:45:09.409646  148480 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:45:09.411354  148480 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:45:09.411454  148480 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:45:09.566015  148480 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:45:09.566094  148480 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:45:10.073425  148480 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 508.188183ms
	I0420 01:45:10.073572  148480 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
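	The [api-check] phase above polls the new API server until it reports healthy or the 4m0s deadline passes. A self-contained Go sketch of such a readiness poll; the /healthz path, the node endpoint 192.168.61.191:8443 taken from the StartCluster config above, and the skipped TLS verification are illustrative assumptions, not kubeadm's exact behaviour:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer polls a health endpoint until it returns 200 OK or the
	// deadline expires, similar in spirit to kubeadm's [api-check] wait above.
	func waitForAPIServer(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver's serving cert is signed by the cluster CA; verification
			// is skipped here only to keep the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("API server at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForAPIServer("https://192.168.61.191:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
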
	
	
	==> CRI-O <==
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.117238188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577513117216119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=caab860c-9e62-4cf3-86ea-ba1d7bccb3ff name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.117950433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f32ab44-a735-4eb7-90ef-41db3951fa34 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.118035206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f32ab44-a735-4eb7-90ef-41db3951fa34 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.118464692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031,PodSandboxId:b33d1aec626eb1433ac85d191075dd66073501f5a366a78ec8bd16694e81cfa8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576767181067027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c12418-805f-4923-b7ab-4fa0fe07ec9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6f824527,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3,PodSandboxId:27853fa3c62eb7d341e02dd40a599b437d79561b0058a63303d3665b540c2b94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766464108947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lhnxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0fb3119-abcb-4646-9aae-a54438a76adf,},Annotations:map[string]string{io.kubernetes.container.hash: 744d27ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d,PodSandboxId:09e00fbbb48fd2831199a1546285d81720184d589490604df33575ce42b0ea88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766317384440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8jvsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8
3784a0-6942-4906-ba66-76d7fa25dc04,},Annotations:map[string]string{io.kubernetes.container.hash: 5c9a26e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6,PodSandboxId:89ed92966bdfe66e648259c571784d3f37474b077aba684a806c60d6f3951885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713576765484422380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f57d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54252f52-9bb1-48a2-98e1-980f40fa727d,},Annotations:map[string]string{io.kubernetes.container.hash: 60963711,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab,PodSandboxId:781d22b357d6f83fc472b8acea335f9169bc1366ac060a3e41e9644f1a2e9689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576746081900568,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d4d800db9704a575894ed300277d2,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb,PodSandboxId:54a949b714e584cc49aae201c37a1b6d3f813aca2883b253b98d9d61e308020d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576746089379598,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c9d8b697029f4835cac7bf45661ef0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26,PodSandboxId:ee4c8021ef4d8a2e0db2561c1241e85501868ab531431f700c892d7c136bc69f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576746118139827,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d058398ee22df8b2543ed012544bc525,},Annotations:map[string]string{io.kubernetes.container.hash: fbb975a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d,PodSandboxId:d91316e86d41c4e8fde7213da8fb6c9a78cd9b5680554264ed599da314383eb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576746054604644,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed7f8a123467f5638e826b4e70ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 122cf7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f32ab44-a735-4eb7-90ef-41db3951fa34 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.169149733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9e3f6d9-570c-452b-8e8f-012281d7244c name=/runtime.v1.RuntimeService/Version
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.169251563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9e3f6d9-570c-452b-8e8f-012281d7244c name=/runtime.v1.RuntimeService/Version
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.171630386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=746a3d02-d42f-46f7-9709-f727799aae66 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.172038336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577513172006223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=746a3d02-d42f-46f7-9709-f727799aae66 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.172819896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d27386fd-21c0-4e04-a2fa-1393008f47ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.172902381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d27386fd-21c0-4e04-a2fa-1393008f47ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.173094863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031,PodSandboxId:b33d1aec626eb1433ac85d191075dd66073501f5a366a78ec8bd16694e81cfa8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576767181067027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c12418-805f-4923-b7ab-4fa0fe07ec9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6f824527,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3,PodSandboxId:27853fa3c62eb7d341e02dd40a599b437d79561b0058a63303d3665b540c2b94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766464108947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lhnxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0fb3119-abcb-4646-9aae-a54438a76adf,},Annotations:map[string]string{io.kubernetes.container.hash: 744d27ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d,PodSandboxId:09e00fbbb48fd2831199a1546285d81720184d589490604df33575ce42b0ea88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766317384440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8jvsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8
3784a0-6942-4906-ba66-76d7fa25dc04,},Annotations:map[string]string{io.kubernetes.container.hash: 5c9a26e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6,PodSandboxId:89ed92966bdfe66e648259c571784d3f37474b077aba684a806c60d6f3951885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713576765484422380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f57d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54252f52-9bb1-48a2-98e1-980f40fa727d,},Annotations:map[string]string{io.kubernetes.container.hash: 60963711,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab,PodSandboxId:781d22b357d6f83fc472b8acea335f9169bc1366ac060a3e41e9644f1a2e9689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576746081900568,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d4d800db9704a575894ed300277d2,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb,PodSandboxId:54a949b714e584cc49aae201c37a1b6d3f813aca2883b253b98d9d61e308020d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576746089379598,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c9d8b697029f4835cac7bf45661ef0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26,PodSandboxId:ee4c8021ef4d8a2e0db2561c1241e85501868ab531431f700c892d7c136bc69f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576746118139827,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d058398ee22df8b2543ed012544bc525,},Annotations:map[string]string{io.kubernetes.container.hash: fbb975a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d,PodSandboxId:d91316e86d41c4e8fde7213da8fb6c9a78cd9b5680554264ed599da314383eb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576746054604644,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed7f8a123467f5638e826b4e70ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 122cf7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d27386fd-21c0-4e04-a2fa-1393008f47ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.222052307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=806d3b9c-fc86-4993-9f0c-96c46826a8a4 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.222372712Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=806d3b9c-fc86-4993-9f0c-96c46826a8a4 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.223750901Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f5bc448-61f2-41a6-a91d-3b3bae1f045a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.224084781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577513224063822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f5bc448-61f2-41a6-a91d-3b3bae1f045a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.224856922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc7dce7b-69fe-4583-aa86-680ffe3fd558 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.224936605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc7dce7b-69fe-4583-aa86-680ffe3fd558 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.226866824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031,PodSandboxId:b33d1aec626eb1433ac85d191075dd66073501f5a366a78ec8bd16694e81cfa8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576767181067027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c12418-805f-4923-b7ab-4fa0fe07ec9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6f824527,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3,PodSandboxId:27853fa3c62eb7d341e02dd40a599b437d79561b0058a63303d3665b540c2b94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766464108947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lhnxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0fb3119-abcb-4646-9aae-a54438a76adf,},Annotations:map[string]string{io.kubernetes.container.hash: 744d27ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d,PodSandboxId:09e00fbbb48fd2831199a1546285d81720184d589490604df33575ce42b0ea88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766317384440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8jvsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8
3784a0-6942-4906-ba66-76d7fa25dc04,},Annotations:map[string]string{io.kubernetes.container.hash: 5c9a26e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6,PodSandboxId:89ed92966bdfe66e648259c571784d3f37474b077aba684a806c60d6f3951885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713576765484422380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f57d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54252f52-9bb1-48a2-98e1-980f40fa727d,},Annotations:map[string]string{io.kubernetes.container.hash: 60963711,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab,PodSandboxId:781d22b357d6f83fc472b8acea335f9169bc1366ac060a3e41e9644f1a2e9689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576746081900568,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d4d800db9704a575894ed300277d2,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb,PodSandboxId:54a949b714e584cc49aae201c37a1b6d3f813aca2883b253b98d9d61e308020d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576746089379598,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c9d8b697029f4835cac7bf45661ef0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26,PodSandboxId:ee4c8021ef4d8a2e0db2561c1241e85501868ab531431f700c892d7c136bc69f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576746118139827,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d058398ee22df8b2543ed012544bc525,},Annotations:map[string]string{io.kubernetes.container.hash: fbb975a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d,PodSandboxId:d91316e86d41c4e8fde7213da8fb6c9a78cd9b5680554264ed599da314383eb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576746054604644,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed7f8a123467f5638e826b4e70ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 122cf7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc7dce7b-69fe-4583-aa86-680ffe3fd558 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.273975120Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=702a5896-0e02-421a-87b8-3a3797f420a0 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.274046674Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=702a5896-0e02-421a-87b8-3a3797f420a0 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.275483261Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99338fb3-ccd6-4328-9d51-ecd2dd386e80 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.275865472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577513275843421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99338fb3-ccd6-4328-9d51-ecd2dd386e80 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.276598123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1c151be-07e3-4cb8-820b-8fe3a547e079 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.276649173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1c151be-07e3-4cb8-820b-8fe3a547e079 name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:45:13 no-preload-338118 crio[724]: time="2024-04-20 01:45:13.276840822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031,PodSandboxId:b33d1aec626eb1433ac85d191075dd66073501f5a366a78ec8bd16694e81cfa8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713576767181067027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c12418-805f-4923-b7ab-4fa0fe07ec9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6f824527,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3,PodSandboxId:27853fa3c62eb7d341e02dd40a599b437d79561b0058a63303d3665b540c2b94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766464108947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lhnxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0fb3119-abcb-4646-9aae-a54438a76adf,},Annotations:map[string]string{io.kubernetes.container.hash: 744d27ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d,PodSandboxId:09e00fbbb48fd2831199a1546285d81720184d589490604df33575ce42b0ea88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713576766317384440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8jvsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8
3784a0-6942-4906-ba66-76d7fa25dc04,},Annotations:map[string]string{io.kubernetes.container.hash: 5c9a26e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6,PodSandboxId:89ed92966bdfe66e648259c571784d3f37474b077aba684a806c60d6f3951885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713576765484422380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f57d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54252f52-9bb1-48a2-98e1-980f40fa727d,},Annotations:map[string]string{io.kubernetes.container.hash: 60963711,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab,PodSandboxId:781d22b357d6f83fc472b8acea335f9169bc1366ac060a3e41e9644f1a2e9689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713576746081900568,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d4d800db9704a575894ed300277d2,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb,PodSandboxId:54a949b714e584cc49aae201c37a1b6d3f813aca2883b253b98d9d61e308020d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713576746089379598,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c9d8b697029f4835cac7bf45661ef0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26,PodSandboxId:ee4c8021ef4d8a2e0db2561c1241e85501868ab531431f700c892d7c136bc69f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713576746118139827,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d058398ee22df8b2543ed012544bc525,},Annotations:map[string]string{io.kubernetes.container.hash: fbb975a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d,PodSandboxId:d91316e86d41c4e8fde7213da8fb6c9a78cd9b5680554264ed599da314383eb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713576746054604644,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-338118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed7f8a123467f5638e826b4e70ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 122cf7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1c151be-07e3-4cb8-820b-8fe3a547e079 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3767b5a85864a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   b33d1aec626eb       storage-provisioner
	14222cfb74612       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 minutes ago      Running             coredns                   0                   27853fa3c62eb       coredns-7db6d8ff4d-lhnxg
	b0820d3d9e22e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 minutes ago      Running             coredns                   0                   09e00fbbb48fd       coredns-7db6d8ff4d-8jvsz
	444f48e537518       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   12 minutes ago      Running             kube-proxy                0                   89ed92966bdfe       kube-proxy-f57d9
	258f4b3a17cd3       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   12 minutes ago      Running             kube-apiserver            3                   ee4c8021ef4d8       kube-apiserver-no-preload-338118
	41521be8a42d1       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   12 minutes ago      Running             kube-controller-manager   3                   54a949b714e58       kube-controller-manager-no-preload-338118
	0c43ca20df102       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   12 minutes ago      Running             kube-scheduler            2                   781d22b357d6f       kube-scheduler-no-preload-338118
	a3cefb8dc1660       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   12 minutes ago      Running             etcd                      2                   d91316e86d41c       etcd-no-preload-338118
	
	
	==> coredns [14222cfb746124f3cb63ed8bd73f1607212e523f11521e35a397f013eb676eb3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b0820d3d9e22e9b8a6a6c9b1563a916c12802fa5096ba848dbcac19f37092b2d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-338118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-338118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=no-preload-338118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T01_32_32_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:32:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-338118
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:45:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:43:03 +0000   Sat, 20 Apr 2024 01:32:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:43:03 +0000   Sat, 20 Apr 2024 01:32:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:43:03 +0000   Sat, 20 Apr 2024 01:32:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:43:03 +0000   Sat, 20 Apr 2024 01:32:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.89
	  Hostname:    no-preload-338118
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b73ffd0cf75b41f8a91992d4edaf23be
	  System UUID:                b73ffd0c-f75b-41f8-a919-92d4edaf23be
	  Boot ID:                    168082aa-1171-464e-a3a5-292a54461c4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8jvsz                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-lhnxg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-no-preload-338118                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kube-apiserver-no-preload-338118             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-338118    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-f57d9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-338118             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-569cc877fc-xbwdm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node no-preload-338118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node no-preload-338118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node no-preload-338118 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node no-preload-338118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node no-preload-338118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node no-preload-338118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node no-preload-338118 event: Registered Node no-preload-338118 in Controller
	
	
	==> dmesg <==
	[  +0.053930] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.141718] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.599165] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.759981] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.853303] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.055508] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060058] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.197901] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.154711] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.339769] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +17.537229] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.061044] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.835869] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[Apr20 01:27] kauditd_printk_skb: 84 callbacks suppressed
	[ +31.927995] kauditd_printk_skb: 55 callbacks suppressed
	[Apr20 01:28] kauditd_printk_skb: 24 callbacks suppressed
	[Apr20 01:32] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.346395] systemd-fstab-generator[4077]: Ignoring "noauto" option for root device
	[  +4.637309] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.946258] systemd-fstab-generator[4405]: Ignoring "noauto" option for root device
	[ +13.407161] systemd-fstab-generator[4599]: Ignoring "noauto" option for root device
	[  +0.113132] kauditd_printk_skb: 14 callbacks suppressed
	[Apr20 01:33] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [a3cefb8dc166047a93d63cc578aa1f38247d79417c2bf0a35d04fabebd1c159d] <==
	{"level":"info","ts":"2024-04-20T01:32:26.565609Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2a97606ea537aa00","initial-advertise-peer-urls":["https://192.168.72.89:2380"],"listen-peer-urls":["https://192.168.72.89:2380"],"advertise-client-urls":["https://192.168.72.89:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.89:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:32:26.565796Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T01:32:26.565917Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.89:2380"}
	{"level":"info","ts":"2024-04-20T01:32:26.565945Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.89:2380"}
	{"level":"info","ts":"2024-04-20T01:32:26.585426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-20T01:32:26.585566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-20T01:32:26.585647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 received MsgPreVoteResp from 2a97606ea537aa00 at term 1"}
	{"level":"info","ts":"2024-04-20T01:32:26.585686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 became candidate at term 2"}
	{"level":"info","ts":"2024-04-20T01:32:26.585711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 received MsgVoteResp from 2a97606ea537aa00 at term 2"}
	{"level":"info","ts":"2024-04-20T01:32:26.585737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a97606ea537aa00 became leader at term 2"}
	{"level":"info","ts":"2024-04-20T01:32:26.585762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2a97606ea537aa00 elected leader 2a97606ea537aa00 at term 2"}
	{"level":"info","ts":"2024-04-20T01:32:26.590549Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2a97606ea537aa00","local-member-attributes":"{Name:no-preload-338118 ClientURLs:[https://192.168.72.89:2379]}","request-path":"/0/members/2a97606ea537aa00/attributes","cluster-id":"3131bac5af784039","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:32:26.590792Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:32:26.59116Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:32:26.597334Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:32:26.597384Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:32:26.591422Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:32:26.599528Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3131bac5af784039","local-member-id":"2a97606ea537aa00","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:32:26.599669Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:32:26.602458Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:32:26.601164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.89:2379"}
	{"level":"info","ts":"2024-04-20T01:32:26.603178Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T01:42:27.101095Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":721}
	{"level":"info","ts":"2024-04-20T01:42:27.112723Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":721,"took":"10.898913ms","hash":1111716376,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2273280,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-04-20T01:42:27.112823Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1111716376,"revision":721,"compact-revision":-1}
	
	
	==> kernel <==
	 01:45:13 up 18 min,  0 users,  load average: 0.26, 0.18, 0.17
	Linux no-preload-338118 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [258f4b3a17cd33aaba1dc9bf1fb8fd978853aa0ca37193b2f22e68a87e36ac26] <==
	I0420 01:38:29.750452       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:40:29.749464       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:40:29.749567       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:40:29.749579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:40:29.750697       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:40:29.750849       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:40:29.750886       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:42:28.753933       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:42:28.754060       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0420 01:42:29.754494       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:42:29.754618       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:42:29.754632       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:42:29.754777       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:42:29.754894       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:42:29.755837       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:43:29.755113       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:43:29.755231       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0420 01:43:29.755338       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0420 01:43:29.756241       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 01:43:29.756416       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 01:43:29.756451       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [41521be8a42d149366098d2a485d866fab1434a9b691ed6fc108fd46dde574fb] <==
	I0420 01:39:15.065541       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:39:44.539218       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:39:45.075546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:40:14.546449       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:40:15.085221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:40:44.553111       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:40:45.094837       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:41:14.559180       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:41:15.109523       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:41:44.567350       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:41:45.119770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:42:14.574238       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:42:15.131138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:42:44.581412       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:42:45.140454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:43:14.589103       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:43:15.153857       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:43:44.595090       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:43:45.166212       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0420 01:44:00.675835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="296.333µs"
	I0420 01:44:11.675727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="120.232µs"
	E0420 01:44:14.601049       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:44:15.180198       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0420 01:44:44.607960       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0420 01:44:45.191753       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [444f48e53751865d866ebde038d4b741a5f2c1e9a188f0835a9fb989c08122e6] <==
	I0420 01:32:46.019258       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:32:46.042217       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.89"]
	I0420 01:32:46.353507       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:32:46.359516       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:32:46.359811       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:32:46.521268       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:32:46.521582       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:32:46.521601       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:32:46.540869       1 config.go:192] "Starting service config controller"
	I0420 01:32:46.540907       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:32:46.540943       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:32:46.540947       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:32:46.541946       1 config.go:319] "Starting node config controller"
	I0420 01:32:46.542076       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:32:46.641631       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:32:46.641708       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:32:46.648329       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0c43ca20df1029f80954bdaaf50aa37d7544ef1606039b3384de429587e6fdab] <==
	W0420 01:32:28.760699       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:32:28.761677       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:32:29.682688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 01:32:29.682817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 01:32:29.709578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 01:32:29.709675       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 01:32:29.711604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 01:32:29.711667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 01:32:29.718270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 01:32:29.718423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 01:32:29.728740       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:32:29.728800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 01:32:29.737965       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 01:32:29.737989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 01:32:29.775591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:32:29.775648       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:32:29.803652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 01:32:29.803736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 01:32:29.856164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 01:32:29.856228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 01:32:29.909376       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 01:32:29.909432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 01:32:29.926264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 01:32:29.926422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0420 01:32:32.533729       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 01:42:39 no-preload-338118 kubelet[4412]: E0420 01:42:39.658746    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:42:54 no-preload-338118 kubelet[4412]: E0420 01:42:54.657104    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:43:05 no-preload-338118 kubelet[4412]: E0420 01:43:05.657194    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:43:19 no-preload-338118 kubelet[4412]: E0420 01:43:19.656434    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:43:31 no-preload-338118 kubelet[4412]: E0420 01:43:31.680991    4412 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:43:31 no-preload-338118 kubelet[4412]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:43:31 no-preload-338118 kubelet[4412]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:43:31 no-preload-338118 kubelet[4412]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:43:31 no-preload-338118 kubelet[4412]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:43:33 no-preload-338118 kubelet[4412]: E0420 01:43:33.656139    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:43:45 no-preload-338118 kubelet[4412]: E0420 01:43:45.681976    4412 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 20 01:43:45 no-preload-338118 kubelet[4412]: E0420 01:43:45.682082    4412 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 20 01:43:45 no-preload-338118 kubelet[4412]: E0420 01:43:45.682490    4412 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv2gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-xbwdm_kube-system(798c7b61-a93d-4daf-a832-e15056a2ae24): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 20 01:43:45 no-preload-338118 kubelet[4412]: E0420 01:43:45.682580    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:44:00 no-preload-338118 kubelet[4412]: E0420 01:44:00.657735    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:44:11 no-preload-338118 kubelet[4412]: E0420 01:44:11.657801    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:44:24 no-preload-338118 kubelet[4412]: E0420 01:44:24.657075    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:44:31 no-preload-338118 kubelet[4412]: E0420 01:44:31.680989    4412 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:44:31 no-preload-338118 kubelet[4412]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:44:31 no-preload-338118 kubelet[4412]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:44:31 no-preload-338118 kubelet[4412]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:44:31 no-preload-338118 kubelet[4412]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:44:39 no-preload-338118 kubelet[4412]: E0420 01:44:39.657832    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:44:52 no-preload-338118 kubelet[4412]: E0420 01:44:52.655981    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	Apr 20 01:45:05 no-preload-338118 kubelet[4412]: E0420 01:45:05.657100    4412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xbwdm" podUID="798c7b61-a93d-4daf-a832-e15056a2ae24"
	
	
	==> storage-provisioner [3767b5a85864a238c42e3cc300293883e42c5502260fcced065898a395927031] <==
	I0420 01:32:47.280814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 01:32:47.292136       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 01:32:47.292242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 01:32:47.303702       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 01:32:47.303830       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-338118_eac38729-d5ab-4109-971a-c3e155be402a!
	I0420 01:32:47.304630       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ae8f39ae-31e9-464c-9832-008367d3cf14", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-338118_eac38729-d5ab-4109-971a-c3e155be402a became leader
	I0420 01:32:47.404706       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-338118_eac38729-d5ab-4109-971a-c3e155be402a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-338118 -n no-preload-338118
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-338118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-xbwdm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-338118 describe pod metrics-server-569cc877fc-xbwdm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-338118 describe pod metrics-server-569cc877fc-xbwdm: exit status 1 (61.852035ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-xbwdm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-338118 describe pod metrics-server-569cc877fc-xbwdm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (202.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (63.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:43:49.904546   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/kindnet-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
E0420 01:43:54.474874   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/enable-default-cni-831611/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.91:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.91:8443: connect: connection refused
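The warnings above all report the same underlying condition: nothing is accepting connections on the apiserver endpoint 192.168.61.91:8443 while the cluster is restarting. As an illustration only (not part of the test suite), a minimal Go probe for reproducing that check by hand could look like the sketch below; the address is taken from the log and everything else is hypothetical.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address of the old-k8s-version apiserver as it appears in the warnings above.
	conn, err := net.DialTimeout("tcp", "192.168.61.91:8443", 3*time.Second)
	if err != nil {
		// While the node is down this prints e.g. "connect: connection refused".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}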
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 2 (259.071288ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-564860" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-564860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-564860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.708µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-564860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
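For reference, the readiness wait that times out above polls the kubernetes-dashboard namespace for pods labelled k8s-app=kubernetes-dashboard until the 9m0s deadline expires. The standalone client-go loop below is a rough sketch of that behaviour, not the actual helpers_test.go code; the context name, namespace, selector, and timeout come from the log, while the 5-second poll interval and all identifiers are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client for the same kubeconfig context the test uses.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-564860"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// While the apiserver is down this is the "connection refused" warning seen above.
			fmt.Println("pod list failed:", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod running:", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			// Mirrors the "context deadline exceeded" failure reported by the test.
			fmt.Println("gave up:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
		}
	}
}

The test additionally checks pod readiness conditions and the addon's image (expected to contain registry.k8s.io/echoserver:1.4, per the failure above); the loop here only covers the basic "is a dashboard pod running" part of that check.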
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 2 (269.075049ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-564860 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-564860 logs -n 25: (1.586482661s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-831611 sudo                          | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-831611                               | custom-flannel-831611        | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-172352 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:16 UTC |
	|         | disable-driver-mounts-172352                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:16 UTC | 20 Apr 24 01:17 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-338118             | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC | 20 Apr 24 01:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-907988  | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-269507            | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC | 20 Apr 24 01:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-564860        | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-338118                  | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-338118                                   | no-preload-338118            | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-907988       | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-907988 | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:30 UTC |
	|         | default-k8s-diff-port-907988                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-269507                 | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-269507                                  | embed-certs-269507           | jenkins | v1.33.0 | 20 Apr 24 01:20 UTC | 20 Apr 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-564860             | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC | 20 Apr 24 01:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-564860                              | old-k8s-version-564860       | jenkins | v1.33.0 | 20 Apr 24 01:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:21:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:21:33.400343  142411 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:21:33.400444  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400452  142411 out.go:304] Setting ErrFile to fd 2...
	I0420 01:21:33.400464  142411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:21:33.400681  142411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:21:33.401213  142411 out.go:298] Setting JSON to false
	I0420 01:21:33.402151  142411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14640,"bootTime":1713561453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:21:33.402214  142411 start.go:139] virtualization: kvm guest
	I0420 01:21:33.404200  142411 out.go:177] * [old-k8s-version-564860] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:21:33.405933  142411 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:21:33.407240  142411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:21:33.405946  142411 notify.go:220] Checking for updates...
	I0420 01:21:33.408693  142411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:21:33.409906  142411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:21:33.411155  142411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:21:33.412528  142411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:21:33.414062  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:21:33.414460  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.414524  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.428987  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0420 01:21:33.429348  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.429850  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.429873  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.430178  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.430370  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.431825  142411 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0420 01:21:33.432895  142411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:21:33.433209  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:21:33.433251  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:21:33.447157  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0420 01:21:33.447543  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:21:33.448080  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:21:33.448123  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:21:33.448444  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:21:33.448609  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:21:33.481664  142411 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 01:21:33.482784  142411 start.go:297] selected driver: kvm2
	I0420 01:21:33.482796  142411 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-5
64860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.482903  142411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:21:33.483572  142411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.483646  142411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0420 01:21:33.497421  142411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0420 01:21:33.497790  142411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:21:33.497854  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:21:33.497869  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:21:33.497915  142411 start.go:340] cluster config:
	{Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:21:33.498027  142411 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:21:33.499624  142411 out.go:177] * Starting "old-k8s-version-564860" primary control-plane node in "old-k8s-version-564860" cluster
	I0420 01:21:33.500874  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:21:33.500901  142411 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0420 01:21:33.500914  142411 cache.go:56] Caching tarball of preloaded images
	I0420 01:21:33.500992  142411 preload.go:173] Found /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0420 01:21:33.501007  142411 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 01:21:33.501116  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:21:33.501613  142411 start.go:360] acquireMachinesLock for old-k8s-version-564860: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:21:35.817529  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:38.889617  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:44.969590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:48.041555  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:54.121550  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:21:57.193604  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:03.273575  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:06.345487  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:12.425567  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:15.497538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:21.577563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:24.649534  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:30.729573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:33.801566  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:39.881590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:42.953591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:49.033641  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:52.105579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:22:58.185591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:01.257655  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:07.337585  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:10.409568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:16.489562  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:19.561602  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:25.641579  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:28.713581  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:34.793618  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:37.865643  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:43.945593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:47.017561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:53.097597  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:23:56.169538  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:02.249561  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:05.321557  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:11.401563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:14.473539  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:20.553591  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:23.625573  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:29.705563  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:32.777590  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:38.857568  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:41.929619  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:48.009565  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:51.081536  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:24:57.161593  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:00.233633  141746 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.89:22: connect: no route to host
	I0420 01:25:03.237801  141927 start.go:364] duration metric: took 4m24.096402827s to acquireMachinesLock for "default-k8s-diff-port-907988"
	I0420 01:25:03.237873  141927 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:03.237883  141927 fix.go:54] fixHost starting: 
	I0420 01:25:03.238412  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:03.238453  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:03.254029  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I0420 01:25:03.254570  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:03.255071  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:25:03.255097  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:03.255474  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:03.255703  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:03.255871  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:25:03.257395  141927 fix.go:112] recreateIfNeeded on default-k8s-diff-port-907988: state=Stopped err=<nil>
	I0420 01:25:03.257430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	W0420 01:25:03.257577  141927 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:03.259083  141927 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-907988" ...
	I0420 01:25:03.260199  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Start
	I0420 01:25:03.260402  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring networks are active...
	I0420 01:25:03.261176  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network default is active
	I0420 01:25:03.261553  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Ensuring network mk-default-k8s-diff-port-907988 is active
	I0420 01:25:03.262016  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Getting domain xml...
	I0420 01:25:03.262834  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Creating domain...
	I0420 01:25:03.235208  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:03.235275  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235620  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:25:03.235653  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:25:03.235902  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:25:03.237636  141746 machine.go:97] duration metric: took 4m37.412949021s to provisionDockerMachine
	I0420 01:25:03.237677  141746 fix.go:56] duration metric: took 4m37.433896084s for fixHost
	I0420 01:25:03.237685  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 4m37.433927307s
	W0420 01:25:03.237715  141746 start.go:713] error starting host: provision: host is not running
	W0420 01:25:03.237980  141746 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0420 01:25:03.238076  141746 start.go:728] Will try again in 5 seconds ...
	I0420 01:25:04.453535  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting to get IP...
	I0420 01:25:04.454427  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454803  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.454886  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.454785  143129 retry.go:31] will retry after 205.593849ms: waiting for machine to come up
	I0420 01:25:04.662560  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663106  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.663133  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.663007  143129 retry.go:31] will retry after 246.821866ms: waiting for machine to come up
	I0420 01:25:04.911578  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912067  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:04.912100  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:04.912014  143129 retry.go:31] will retry after 478.36287ms: waiting for machine to come up
	I0420 01:25:05.391624  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.392063  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.391965  143129 retry.go:31] will retry after 495.387005ms: waiting for machine to come up
	I0420 01:25:05.888569  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889093  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:05.889116  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:05.889009  143129 retry.go:31] will retry after 721.867239ms: waiting for machine to come up
	I0420 01:25:06.613018  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613550  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:06.613583  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:06.613495  143129 retry.go:31] will retry after 724.502229ms: waiting for machine to come up
	I0420 01:25:07.339473  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:07.339974  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:07.339883  143129 retry.go:31] will retry after 916.936196ms: waiting for machine to come up
	I0420 01:25:08.258657  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259033  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:08.259064  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:08.258981  143129 retry.go:31] will retry after 1.088675043s: waiting for machine to come up
	I0420 01:25:08.239597  141746 start.go:360] acquireMachinesLock for no-preload-338118: {Name:mk13b4d07514800a45d583c31ae5b496189ee3e9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0420 01:25:09.349021  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349421  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:09.349453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:09.349362  143129 retry.go:31] will retry after 1.139610002s: waiting for machine to come up
	I0420 01:25:10.490715  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491162  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:10.491190  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:10.491119  143129 retry.go:31] will retry after 1.625829976s: waiting for machine to come up
	I0420 01:25:12.118751  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119231  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:12.119254  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:12.119184  143129 retry.go:31] will retry after 2.904309002s: waiting for machine to come up
	I0420 01:25:15.025713  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:15.026310  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:15.026227  143129 retry.go:31] will retry after 3.471792967s: waiting for machine to come up
	I0420 01:25:18.500247  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | unable to find current IP address of domain default-k8s-diff-port-907988 in network mk-default-k8s-diff-port-907988
	I0420 01:25:18.500679  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | I0420 01:25:18.500595  143129 retry.go:31] will retry after 4.499766051s: waiting for machine to come up
	I0420 01:25:23.005446  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.005935  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Found IP for machine: 192.168.39.222
	I0420 01:25:23.005956  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserving static IP address...
	I0420 01:25:23.005970  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has current primary IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.006453  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.006479  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Reserved static IP address: 192.168.39.222
	I0420 01:25:23.006513  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | skip adding static IP to network mk-default-k8s-diff-port-907988 - found existing host DHCP lease matching {name: "default-k8s-diff-port-907988", mac: "52:54:00:c7:22:6d", ip: "192.168.39.222"}
	I0420 01:25:23.006537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Waiting for SSH to be available...
	I0420 01:25:23.006544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Getting to WaitForSSH function...
	I0420 01:25:23.009090  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009505  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.009537  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.009658  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH client type: external
	I0420 01:25:23.009695  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa (-rw-------)
	I0420 01:25:23.009732  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:23.009748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | About to run SSH command:
	I0420 01:25:23.009766  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | exit 0
	I0420 01:25:23.133489  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | SSH cmd err, output: <nil>: 
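	The "will retry after ..." lines above come from libmachine polling for the VM's DHCP lease with a growing delay, then falling through to the SSH wait once an address appears. A minimal Go sketch of that poll-with-backoff pattern (the helper names and the delay growth factor are illustrative assumptions, not minikube's actual retry.go code):

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookup until it returns an address, sleeping a growing delay
	// between attempts, similar to the "will retry after ..." lines in the log above.
	func waitForIP(lookup func() (string, error), budget time.Duration) (string, error) {
		deadline := time.Now().Add(budget)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait between polls (assumed factor)
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 3 {
				return "", errors.New("no lease yet") // simulate the lease not existing yet
			}
			return "192.168.39.222", nil
		}, 5*time.Second)
		fmt.Println(ip, err)
	}
	```

	Growing the delay keeps the log quiet during a slow boot while still reacting quickly when the lease shows up early.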
	I0420 01:25:23.133940  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetConfigRaw
	I0420 01:25:23.134589  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.137340  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.137685  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.137708  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.138000  141927 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/config.json ...
	I0420 01:25:23.138228  141927 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:23.138253  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:23.138461  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.140536  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.140815  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.140841  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.141024  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.141244  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141450  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.141595  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.141777  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.142053  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.142067  141927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:23.249946  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:23.249979  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250250  141927 buildroot.go:166] provisioning hostname "default-k8s-diff-port-907988"
	I0420 01:25:23.250280  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.250483  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.253030  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.253456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.253564  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.253755  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.253978  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.254135  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.254334  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.254504  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.254517  141927 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-907988 && echo "default-k8s-diff-port-907988" | sudo tee /etc/hostname
	I0420 01:25:23.379061  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-907988
	
	I0420 01:25:23.379092  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.381893  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382249  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.382278  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.382465  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.382666  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382831  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.382939  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.383118  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.383324  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.383349  141927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-907988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-907988/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-907988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:23.499869  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:23.499903  141927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:23.499932  141927 buildroot.go:174] setting up certificates
	I0420 01:25:23.499941  141927 provision.go:84] configureAuth start
	I0420 01:25:23.499950  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetMachineName
	I0420 01:25:23.500178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:23.502735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503050  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.503085  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.503201  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.505586  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.505924  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.505968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.506036  141927 provision.go:143] copyHostCerts
	I0420 01:25:23.506136  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:23.506150  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:23.506233  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:23.506386  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:23.506396  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:23.506444  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:23.506525  141927 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:23.506536  141927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:23.506569  141927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:23.506640  141927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-907988 san=[127.0.0.1 192.168.39.222 default-k8s-diff-port-907988 localhost minikube]
	I0420 01:25:23.598855  141927 provision.go:177] copyRemoteCerts
	I0420 01:25:23.598930  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:23.598967  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.602183  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.602544  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.602696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.602903  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.603143  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.603301  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:23.688294  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:23.714719  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0420 01:25:23.744530  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:25:23.774733  141927 provision.go:87] duration metric: took 274.778779ms to configureAuth
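	The configureAuth step above regenerates the machine's server certificate so that its SANs cover the node IP and hostname, then copies ca.pem, server.pem, and server-key.pem to /etc/docker on the guest. A self-signed sketch of issuing such a certificate with crypto/x509 (minikube signs with its own CA key instead; this only illustrates the SAN list logged by provision.go):

	```go
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-907988"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs corresponding to the logged list: 127.0.0.1, the node IP, hostname aliases.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
			DNSNames:    []string{"default-k8s-diff-port-907988", "localhost", "minikube"},
		}
		// Self-signed for brevity; the real flow signs with the minikube CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	```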
	I0420 01:25:23.774756  141927 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:23.774990  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:23.775083  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:23.777817  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:23.778213  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:23.778376  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:23.778596  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778763  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:23.778984  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:23.779167  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:23.779364  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:23.779393  141927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:24.314463  142057 start.go:364] duration metric: took 4m32.915907541s to acquireMachinesLock for "embed-certs-269507"
	I0420 01:25:24.314618  142057 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:24.314645  142057 fix.go:54] fixHost starting: 
	I0420 01:25:24.315169  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:24.315220  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:24.331820  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0420 01:25:24.332243  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:24.332707  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:25:24.332730  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:24.333157  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:24.333371  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:24.333551  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:25:24.335004  142057 fix.go:112] recreateIfNeeded on embed-certs-269507: state=Stopped err=<nil>
	I0420 01:25:24.335044  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	W0420 01:25:24.335211  142057 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:24.337246  142057 out.go:177] * Restarting existing kvm2 VM for "embed-certs-269507" ...
	I0420 01:25:24.056795  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:25:24.056832  141927 machine.go:97] duration metric: took 918.585863ms to provisionDockerMachine
	I0420 01:25:24.056849  141927 start.go:293] postStartSetup for "default-k8s-diff-port-907988" (driver="kvm2")
	I0420 01:25:24.056865  141927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:24.056889  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.057250  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:24.057281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.060602  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.060992  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.061028  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.061196  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.061422  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.061631  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.061785  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.152109  141927 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:24.157292  141927 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:24.157330  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:24.157397  141927 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:24.157490  141927 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:24.157606  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:24.171039  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:24.201343  141927 start.go:296] duration metric: took 144.476748ms for postStartSetup
	I0420 01:25:24.201383  141927 fix.go:56] duration metric: took 20.963499628s for fixHost
	I0420 01:25:24.201409  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.204283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204648  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.204681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.204842  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.205022  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205204  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.205411  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.205732  141927 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:24.206255  141927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0420 01:25:24.206269  141927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:24.314311  141927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576324.296261493
	
	I0420 01:25:24.314336  141927 fix.go:216] guest clock: 1713576324.296261493
	I0420 01:25:24.314346  141927 fix.go:229] Guest: 2024-04-20 01:25:24.296261493 +0000 UTC Remote: 2024-04-20 01:25:24.201388226 +0000 UTC m=+285.207728057 (delta=94.873267ms)
	I0420 01:25:24.314373  141927 fix.go:200] guest clock delta is within tolerance: 94.873267ms
	I0420 01:25:24.314380  141927 start.go:83] releasing machines lock for "default-k8s-diff-port-907988", held for 21.076529311s
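	The guest-clock lines above parse `date +%s.%N` from the VM and compare it against the host time, flagging drift only when the delta leaves a tolerance window. A small sketch of that comparison (the one-second tolerance is an assumption for illustration, not the value minikube uses):

	```go
	package main

	import (
		"fmt"
		"time"
	)

	// clockWithinTolerance reports whether guest and host clocks differ by no more
	// than tolerance, regardless of which one is ahead.
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		guest := time.Unix(1713576324, 296261493)            // parsed from `date +%s.%N` on the guest
		host := guest.Add(94873267 * time.Nanosecond)         // the ~94.873ms delta seen in the log
		fmt.Println("delta ok:", clockWithinTolerance(guest, host, time.Second)) // true
	}
	```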
	I0420 01:25:24.314420  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.314699  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:24.317281  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317696  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.317731  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.317858  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318364  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318557  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:25:24.318664  141927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:24.318723  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.318833  141927 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:24.318862  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:25:24.321519  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321572  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321937  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.321968  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.321994  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:24.322011  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:24.322121  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322233  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:25:24.322323  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322502  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:25:24.322725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:25:24.322730  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.322871  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:25:24.403742  141927 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:24.429207  141927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:24.590621  141927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:24.597818  141927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:24.597890  141927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:24.617031  141927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:25:24.617050  141927 start.go:494] detecting cgroup driver to use...
	I0420 01:25:24.617126  141927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:24.643134  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:24.658222  141927 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:24.658275  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:24.672409  141927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:24.686722  141927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:24.810871  141927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:24.965702  141927 docker.go:233] disabling docker service ...
	I0420 01:25:24.965765  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:24.984504  141927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:24.999580  141927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:25.151023  141927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:25.278443  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:25.295439  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:25.316425  141927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:25.316494  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.329052  141927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:25.329119  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.342102  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.354831  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.368084  141927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:25.380515  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.392952  141927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.411707  141927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:25.423776  141927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:25.434175  141927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:25.434234  141927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:25.449180  141927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:25.460018  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:25.579669  141927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:25.741777  141927 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:25.741854  141927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:25.747422  141927 start.go:562] Will wait 60s for crictl version
	I0420 01:25:25.747478  141927 ssh_runner.go:195] Run: which crictl
	I0420 01:25:25.752164  141927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:25.800400  141927 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:25.800491  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.832099  141927 ssh_runner.go:195] Run: crio --version
	I0420 01:25:25.865692  141927 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
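	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf through a series of remote sed commands (pause image, cgroup manager, conmon cgroup, sysctls), then restarts CRI-O and checks the runtime with crictl. A rough sketch of driving such remote edits through the plain ssh CLI (an assumed stand-in for minikube's ssh_runner, showing only two of the logged edits):

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		host := "docker@192.168.39.222"
		key := "/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa"
		cmds := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, c := range cmds {
			// Run each edit over SSH; key-based auth matches the client settings logged above.
			out, err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no", host, c).CombinedOutput()
			fmt.Printf("$ %s\n%s", c, out)
			if err != nil {
				fmt.Println("error:", err)
				return
			}
		}
	}
	```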
	I0420 01:25:24.338547  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Start
	I0420 01:25:24.338743  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring networks are active...
	I0420 01:25:24.339527  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network default is active
	I0420 01:25:24.340064  142057 main.go:141] libmachine: (embed-certs-269507) Ensuring network mk-embed-certs-269507 is active
	I0420 01:25:24.340520  142057 main.go:141] libmachine: (embed-certs-269507) Getting domain xml...
	I0420 01:25:24.341363  142057 main.go:141] libmachine: (embed-certs-269507) Creating domain...
	I0420 01:25:25.566725  142057 main.go:141] libmachine: (embed-certs-269507) Waiting to get IP...
	I0420 01:25:25.567704  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.568195  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.568263  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.568160  143271 retry.go:31] will retry after 229.672507ms: waiting for machine to come up
	I0420 01:25:25.799515  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:25.799964  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:25.799994  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:25.799916  143271 retry.go:31] will retry after 352.048372ms: waiting for machine to come up
	I0420 01:25:26.153710  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.154217  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.154245  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.154159  143271 retry.go:31] will retry after 451.404487ms: waiting for machine to come up
	I0420 01:25:25.867283  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetIP
	I0420 01:25:25.870225  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.870725  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:25:25.870748  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:25:25.871001  141927 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:25.875986  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:25.890923  141927 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907
988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:25.891043  141927 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:25.891088  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:25.934665  141927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:25.934743  141927 ssh_runner.go:195] Run: which lz4
	I0420 01:25:25.939157  141927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:25:25.943759  141927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:25.943788  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:27.674416  141927 crio.go:462] duration metric: took 1.735279369s to copy over tarball
	I0420 01:25:27.674484  141927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:25:26.607751  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:26.608327  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:26.608362  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:26.608273  143271 retry.go:31] will retry after 548.149542ms: waiting for machine to come up
	I0420 01:25:27.157746  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.158193  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.158220  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.158158  143271 retry.go:31] will retry after 543.066807ms: waiting for machine to come up
	I0420 01:25:27.702417  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:27.702812  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:27.702842  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:27.702778  143271 retry.go:31] will retry after 801.842999ms: waiting for machine to come up
	I0420 01:25:28.505673  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:28.506233  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:28.506264  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:28.506169  143271 retry.go:31] will retry after 1.176665861s: waiting for machine to come up
	I0420 01:25:29.684134  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:29.684642  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:29.684676  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:29.684582  143271 retry.go:31] will retry after 1.09397916s: waiting for machine to come up
	I0420 01:25:30.780467  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:30.780962  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:30.780987  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:30.780924  143271 retry.go:31] will retry after 1.560706704s: waiting for machine to come up
	I0420 01:25:30.280138  141927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.605620888s)
	I0420 01:25:30.280235  141927 crio.go:469] duration metric: took 2.605784372s to extract the tarball
	I0420 01:25:30.280269  141927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:30.323590  141927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:30.384053  141927 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:30.384083  141927 cache_images.go:84] Images are preloaded, skipping loading
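	The preload check above runs `sudo crictl images --output json` twice: before the tarball is extracted it finds no kube-apiserver image, and afterwards everything is present so cache loading is skipped. A sketch of that decision, parsing a minimal assumed shape of crictl's JSON output:

	```go
	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	// crictlImages models only the fields this check needs from `crictl images --output json`;
	// the real output carries more per-image metadata.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func hasImage(crictlJSON []byte, want string) (bool, error) {
		var list crictlImages
		if err := json.Unmarshal(crictlJSON, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if strings.EqualFold(tag, want) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"]}]}`)
		ok, _ := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.30.0")
		fmt.Println("preloaded:", ok) // true -> skip loading, as in the log above
	}
	```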
	I0420 01:25:30.384094  141927 kubeadm.go:928] updating node { 192.168.39.222 8444 v1.30.0 crio true true} ...
	I0420 01:25:30.384258  141927 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-907988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:30.384347  141927 ssh_runner.go:195] Run: crio config
	I0420 01:25:30.431033  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:30.431059  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:30.431074  141927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:30.431094  141927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-907988 NodeName:default-k8s-diff-port-907988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:30.431267  141927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-907988"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:25:30.431327  141927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:30.444735  141927 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:30.444807  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:30.457543  141927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0420 01:25:30.477858  141927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:30.497632  141927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0420 01:25:30.518062  141927 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:30.522820  141927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:30.538677  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:30.686290  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:30.721316  141927 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988 for IP: 192.168.39.222
	I0420 01:25:30.721342  141927 certs.go:194] generating shared ca certs ...
	I0420 01:25:30.721373  141927 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:30.721607  141927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:30.721664  141927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:30.721679  141927 certs.go:256] generating profile certs ...
	I0420 01:25:30.721789  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/client.key
	I0420 01:25:30.721873  141927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key.b8de10ae
	I0420 01:25:30.721912  141927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key
	I0420 01:25:30.722019  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:30.722052  141927 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:30.722067  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:30.722094  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:30.722122  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:30.722144  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:30.722189  141927 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:30.723048  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:30.762666  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:30.800218  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:30.849282  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:30.893355  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0420 01:25:30.924642  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:30.956734  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:30.986491  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/default-k8s-diff-port-907988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:31.015876  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:31.043860  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:31.073822  141927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:31.100731  141927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:31.119908  141927 ssh_runner.go:195] Run: openssl version
	I0420 01:25:31.128209  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:31.140164  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145371  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.145432  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:31.151726  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:25:31.163371  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:31.175115  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180237  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.180286  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:31.186548  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:31.198703  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:31.211529  141927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217258  141927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.217326  141927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:31.223822  141927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:25:31.236363  141927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:31.241793  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:31.250826  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:31.259850  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:31.267387  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:31.274477  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:31.281452  141927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 01:25:31.287980  141927 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-907988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-907988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:31.288094  141927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:31.288159  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.344552  141927 cri.go:89] found id: ""
	I0420 01:25:31.344646  141927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:31.357049  141927 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:31.357075  141927 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:31.357081  141927 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:31.357147  141927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:31.368636  141927 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:31.370055  141927 kubeconfig.go:125] found "default-k8s-diff-port-907988" server: "https://192.168.39.222:8444"
	I0420 01:25:31.373063  141927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:31.384821  141927 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.222
	I0420 01:25:31.384861  141927 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:31.384876  141927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:31.384946  141927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:31.432801  141927 cri.go:89] found id: ""
	I0420 01:25:31.432902  141927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:31.458842  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:31.472706  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:31.472728  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:31.472780  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:25:31.486221  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:31.486276  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:31.500036  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:25:31.510180  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:31.510237  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:31.520560  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.530333  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:31.530387  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:31.541053  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:25:31.551200  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:31.551257  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:31.561364  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:31.572967  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:31.690537  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.319980  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.546554  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.631937  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:32.729738  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:32.729838  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.230769  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.730452  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:33.807772  141927 api_server.go:72] duration metric: took 1.07803345s to wait for apiserver process to appear ...
	I0420 01:25:33.807805  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:33.807829  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:33.808551  141927 api_server.go:269] stopped: https://192.168.39.222:8444/healthz: Get "https://192.168.39.222:8444/healthz": dial tcp 192.168.39.222:8444: connect: connection refused
	I0420 01:25:32.342951  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:32.343373  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:32.343420  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:32.343352  143271 retry.go:31] will retry after 1.871100952s: waiting for machine to come up
	I0420 01:25:34.215884  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:34.216313  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:34.216341  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:34.216253  143271 retry.go:31] will retry after 2.017753728s: waiting for machine to come up
	I0420 01:25:36.237296  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:36.237906  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:36.237936  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:36.237856  143271 retry.go:31] will retry after 3.431912056s: waiting for machine to come up
	I0420 01:25:34.308465  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.098889  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.098928  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.098945  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.149496  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:37.149534  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:37.308936  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.313975  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.314005  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:37.808680  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:37.818747  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:37.818784  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.307905  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.318528  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.318563  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:38.808127  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:38.816135  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:38.816167  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.307985  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.313712  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.313753  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:39.808225  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:39.812825  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:39.812858  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.308366  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.312930  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:40.312970  141927 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:40.808320  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:25:40.812979  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:25:40.820265  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:25:40.820289  141927 api_server.go:131] duration metric: took 7.012476869s to wait for apiserver health ...
	I0420 01:25:40.820298  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:25:40.820304  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:40.822367  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:25:39.671070  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:39.671556  142057 main.go:141] libmachine: (embed-certs-269507) DBG | unable to find current IP address of domain embed-certs-269507 in network mk-embed-certs-269507
	I0420 01:25:39.671614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | I0420 01:25:39.671502  143271 retry.go:31] will retry after 3.954438708s: waiting for machine to come up
	I0420 01:25:40.823843  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:25:40.837960  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:25:40.858294  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:25:40.867542  141927 system_pods.go:59] 8 kube-system pods found
	I0420 01:25:40.867577  141927 system_pods.go:61] "coredns-7db6d8ff4d-7v886" [0e0b3a5f-041a-4bbc-94aa-c9571a8761ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:25:40.867584  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [88f687c4-8865-4fe6-92f1-448cfde6117c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:25:40.867590  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [2c9f0d90-35c6-45ad-b9b1-9504c55a1e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:25:40.867597  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [949ce449-06b4-4650-8ba0-7567637d6aec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:25:40.867604  141927 system_pods.go:61] "kube-proxy-dg6xn" [1124d9e8-41aa-44a9-8a4a-eafd2cd6c6c9] Running
	I0420 01:25:40.867626  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [df93de11-c23d-4f5d-afd4-1af7928933fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:25:40.867640  141927 system_pods.go:61] "metrics-server-569cc877fc-rqqlt" [2c7d91c3-fce8-4603-a7be-8d9b415d71f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:25:40.867647  141927 system_pods.go:61] "storage-provisioner" [af4dc99d-feef-4c24-852a-4c8cad22dd7d] Running
	I0420 01:25:40.867654  141927 system_pods.go:74] duration metric: took 9.33485ms to wait for pod list to return data ...
	I0420 01:25:40.867670  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:25:40.871045  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:25:40.871067  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:25:40.871078  141927 node_conditions.go:105] duration metric: took 3.402743ms to run NodePressure ...
	I0420 01:25:40.871094  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:41.142438  141927 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151801  141927 kubeadm.go:733] kubelet initialised
	I0420 01:25:41.151822  141927 kubeadm.go:734] duration metric: took 9.359538ms waiting for restarted kubelet to initialise ...
	I0420 01:25:41.151830  141927 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:25:41.160583  141927 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.169184  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169214  141927 pod_ready.go:81] duration metric: took 8.596607ms for pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.169226  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "coredns-7db6d8ff4d-7v886" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.169234  141927 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.175518  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175544  141927 pod_ready.go:81] duration metric: took 6.298273ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.175558  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.175567  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.189038  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189062  141927 pod_ready.go:81] duration metric: took 13.484198ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.189072  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.189078  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.261162  141927 pod_ready.go:97] node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261191  141927 pod_ready.go:81] duration metric: took 72.106763ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	E0420 01:25:41.261203  141927 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-907988" hosting pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-907988" has status "Ready":"False"
	I0420 01:25:41.261210  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662532  141927 pod_ready.go:92] pod "kube-proxy-dg6xn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:41.662553  141927 pod_ready.go:81] duration metric: took 401.337101ms for pod "kube-proxy-dg6xn" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:41.662562  141927 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:43.670281  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:45.122924  142411 start.go:364] duration metric: took 4m11.621269498s to acquireMachinesLock for "old-k8s-version-564860"
	I0420 01:25:45.122996  142411 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:25:45.123018  142411 fix.go:54] fixHost starting: 
	I0420 01:25:45.123538  142411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:25:45.123581  142411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:25:45.141340  142411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0420 01:25:45.141873  142411 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:25:45.142555  142411 main.go:141] libmachine: Using API Version  1
	I0420 01:25:45.142592  142411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:25:45.142979  142411 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:25:45.143234  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:25:45.143426  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetState
	I0420 01:25:45.145067  142411 fix.go:112] recreateIfNeeded on old-k8s-version-564860: state=Stopped err=<nil>
	I0420 01:25:45.145114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	W0420 01:25:45.145289  142411 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:25:45.147498  142411 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-564860" ...
	I0420 01:25:43.630616  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631126  142057 main.go:141] libmachine: (embed-certs-269507) Found IP for machine: 192.168.50.184
	I0420 01:25:43.631159  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has current primary IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.631173  142057 main.go:141] libmachine: (embed-certs-269507) Reserving static IP address...
	I0420 01:25:43.631625  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.631677  142057 main.go:141] libmachine: (embed-certs-269507) DBG | skip adding static IP to network mk-embed-certs-269507 - found existing host DHCP lease matching {name: "embed-certs-269507", mac: "52:54:00:5d:0f:ba", ip: "192.168.50.184"}
	I0420 01:25:43.631692  142057 main.go:141] libmachine: (embed-certs-269507) Reserved static IP address: 192.168.50.184
	I0420 01:25:43.631710  142057 main.go:141] libmachine: (embed-certs-269507) Waiting for SSH to be available...
	I0420 01:25:43.631731  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Getting to WaitForSSH function...
	I0420 01:25:43.634292  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634614  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.634650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.634833  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH client type: external
	I0420 01:25:43.634883  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa (-rw-------)
	I0420 01:25:43.634916  142057 main.go:141] libmachine: (embed-certs-269507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:25:43.634935  142057 main.go:141] libmachine: (embed-certs-269507) DBG | About to run SSH command:
	I0420 01:25:43.634949  142057 main.go:141] libmachine: (embed-certs-269507) DBG | exit 0
	I0420 01:25:43.757712  142057 main.go:141] libmachine: (embed-certs-269507) DBG | SSH cmd err, output: <nil>: 
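The WaitForSSH probe logged above simply runs `exit 0` over ssh, with host-key checking disabled, until it succeeds. It can be reproduced with the same options shown in the log (key path and IP copied from the lines above; a manual sketch, not minikube's own code):

        ssh -F /dev/null \
          -o ConnectionAttempts=3 -o ConnectTimeout=10 \
          -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
          -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa \
          -p 22 docker@192.168.50.184 'exit 0' && echo "ssh is up"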
	I0420 01:25:43.758118  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetConfigRaw
	I0420 01:25:43.758820  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:43.761626  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762007  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.762083  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.762328  142057 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/config.json ...
	I0420 01:25:43.762556  142057 machine.go:94] provisionDockerMachine start ...
	I0420 01:25:43.762575  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:43.762827  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.765841  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.766304  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.766461  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.766636  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766766  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.766884  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.767111  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.767371  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.767386  142057 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:25:43.874709  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:25:43.874741  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875018  142057 buildroot.go:166] provisioning hostname "embed-certs-269507"
	I0420 01:25:43.875052  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:43.875265  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:43.878226  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878645  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:43.878675  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:43.878767  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:43.878976  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879120  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:43.879246  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:43.879375  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:43.879585  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:43.879613  142057 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-269507 && echo "embed-certs-269507" | sudo tee /etc/hostname
	I0420 01:25:44.003458  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-269507
	
	I0420 01:25:44.003502  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.006277  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006706  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.006745  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.006922  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.007227  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007417  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.007604  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.007772  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.007959  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.007979  142057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-269507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-269507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-269507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:25:44.124457  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:25:44.124494  142057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:25:44.124516  142057 buildroot.go:174] setting up certificates
	I0420 01:25:44.124526  142057 provision.go:84] configureAuth start
	I0420 01:25:44.124537  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetMachineName
	I0420 01:25:44.124850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:44.127589  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.127958  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.127980  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.128196  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.130485  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130792  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.130830  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.130992  142057 provision.go:143] copyHostCerts
	I0420 01:25:44.131060  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:25:44.131075  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:25:44.131132  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:25:44.131237  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:25:44.131246  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:25:44.131266  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:25:44.131326  142057 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:25:44.131333  142057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:25:44.131349  142057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:25:44.131397  142057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.embed-certs-269507 san=[127.0.0.1 192.168.50.184 embed-certs-269507 localhost minikube]
	I0420 01:25:44.404404  142057 provision.go:177] copyRemoteCerts
	I0420 01:25:44.404469  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:25:44.404498  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.407318  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407650  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.407683  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.407850  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.408033  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.408182  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.408307  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.498069  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:25:44.524979  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0420 01:25:44.553537  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:25:44.580307  142057 provision.go:87] duration metric: took 455.767679ms to configureAuth
	I0420 01:25:44.580332  142057 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:25:44.580609  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:44.580722  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.583352  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583728  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.583761  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.583978  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.584205  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584383  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.584516  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.584715  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:44.584905  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:44.584926  142057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:25:44.882565  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
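The `%!s(MISSING)` in the tee command above is a Go fmt artifact in the log (the message template has more format verbs than arguments), not what actually ran on the guest. Judging from the file contents echoed back in the command output, the intended step is roughly:

        sudo mkdir -p /etc/sysconfig
        printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
          | sudo tee /etc/sysconfig/crio.minikube
        sudo systemctl restart crio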
	I0420 01:25:44.882599  142057 machine.go:97] duration metric: took 1.120028956s to provisionDockerMachine
	I0420 01:25:44.882612  142057 start.go:293] postStartSetup for "embed-certs-269507" (driver="kvm2")
	I0420 01:25:44.882622  142057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:25:44.882639  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:44.882971  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:25:44.883012  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:44.885829  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886181  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:44.886208  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:44.886372  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:44.886598  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:44.886761  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:44.886915  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:44.972428  142057 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:25:44.977228  142057 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:25:44.977257  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:25:44.977344  142057 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:25:44.977435  142057 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:25:44.977552  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:25:44.987372  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:45.014435  142057 start.go:296] duration metric: took 131.807177ms for postStartSetup
	I0420 01:25:45.014484  142057 fix.go:56] duration metric: took 20.699839101s for fixHost
	I0420 01:25:45.014512  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.017361  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017768  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.017795  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.017943  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.018150  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018302  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.018421  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.018643  142057 main.go:141] libmachine: Using SSH client type: native
	I0420 01:25:45.018815  142057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0420 01:25:45.018827  142057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:25:45.122766  142057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576345.101529100
	
	I0420 01:25:45.122788  142057 fix.go:216] guest clock: 1713576345.101529100
	I0420 01:25:45.122796  142057 fix.go:229] Guest: 2024-04-20 01:25:45.1015291 +0000 UTC Remote: 2024-04-20 01:25:45.014489313 +0000 UTC m=+293.764572165 (delta=87.039787ms)
	I0420 01:25:45.122823  142057 fix.go:200] guest clock delta is within tolerance: 87.039787ms
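The `date +%!s(MISSING).%!N(MISSING)` command is the same fmt artifact; the guest clock is read with a `date +%s.%N`-style command (the 1713576345.101529100 output matches that format) and fix.go then compares it against the host clock, as the ~87ms delta above shows. A manual spot check along the same lines (key path from the log):

        KEY=/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa
        guest=$(ssh -i "$KEY" docker@192.168.50.184 'date +%s.%N')
        host=$(date +%s.%N)
        awk -v h="$host" -v g="$guest" 'BEGIN { printf "clock delta: %.3fs\n", h - g }'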
	I0420 01:25:45.122828  142057 start.go:83] releasing machines lock for "embed-certs-269507", held for 20.808247089s
	I0420 01:25:45.122851  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.123156  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:45.125956  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126377  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.126408  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.126536  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127059  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127264  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:25:45.127349  142057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:25:45.127404  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.127470  142057 ssh_runner.go:195] Run: cat /version.json
	I0420 01:25:45.127497  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:25:45.130071  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130393  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130427  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130447  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130727  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.130825  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:45.130854  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:45.130932  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131041  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:25:45.131115  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131220  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.131301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:25:45.131451  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:25:45.131597  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:25:45.211824  142057 ssh_runner.go:195] Run: systemctl --version
	I0420 01:25:45.236425  142057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:25:45.383069  142057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:25:45.391072  142057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:25:45.391159  142057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:25:45.410287  142057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
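The find command above (its `%!p(MISSING)` is again the log formatter eating a literal `%p`) renames any existing bridge/podman CNI configs out of the way so the CNI config minikube generates takes precedence. Reconstructed as plain shell (a sketch of the same step):

        sudo find /etc/cni/net.d -maxdepth 1 -type f \
          \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
          -printf "%p, " -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

Here the rename already runs as root (the outer find is under sudo), which is why the inner mv does not repeat it.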
	I0420 01:25:45.410313  142057 start.go:494] detecting cgroup driver to use...
	I0420 01:25:45.410395  142057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:25:45.433663  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:25:45.452933  142057 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:25:45.452999  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:25:45.473208  142057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:25:45.493261  142057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:25:45.650111  142057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:25:45.847482  142057 docker.go:233] disabling docker service ...
	I0420 01:25:45.847559  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:25:45.871032  142057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:25:45.892747  142057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:25:46.076222  142057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:25:46.218078  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:25:46.236006  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:25:46.259279  142057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:25:46.259363  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.272573  142057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:25:46.272647  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.286468  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.298708  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.313197  142057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:25:46.332844  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.345531  142057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.367686  142057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:25:46.379702  142057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:25:46.390491  142057 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:25:46.390558  142057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:25:46.406027  142057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:25:46.417370  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:46.543690  142057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:25:46.725507  142057 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:25:46.725599  142057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:25:46.734173  142057 start.go:562] Will wait 60s for crictl version
	I0420 01:25:46.734246  142057 ssh_runner.go:195] Run: which crictl
	I0420 01:25:46.740381  142057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:25:46.801341  142057 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:25:46.801431  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.843121  142057 ssh_runner.go:195] Run: crio --version
	I0420 01:25:46.889958  142057 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
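Taken together, the steps from 01:25:46.259 through 01:25:46.567 above point CRI-O at the kubeadm pause image, switch it to the cgroupfs cgroup manager with per-pod conmon cgroups, allow unprivileged low ports, and restart the service. Condensed, the same edits the log shows piecewise look like this (assuming the stock /etc/crio/crio.conf.d/02-crio.conf drop-in on the guest):

        CONF=/etc/crio/crio.conf.d/02-crio.conf
        sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
        sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
        sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
        sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
        sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
        sudo systemctl daemon-reload && sudo systemctl restart crio
        sudo crictl version   # should report cri-o 1.29.1 / RuntimeApiVersion v1, as logged above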
	I0420 01:25:45.148885  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .Start
	I0420 01:25:45.149115  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring networks are active...
	I0420 01:25:45.149856  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network default is active
	I0420 01:25:45.150205  142411 main.go:141] libmachine: (old-k8s-version-564860) Ensuring network mk-old-k8s-version-564860 is active
	I0420 01:25:45.150615  142411 main.go:141] libmachine: (old-k8s-version-564860) Getting domain xml...
	I0420 01:25:45.151296  142411 main.go:141] libmachine: (old-k8s-version-564860) Creating domain...
	I0420 01:25:46.465532  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting to get IP...
	I0420 01:25:46.466816  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.467306  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.467383  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.467288  143434 retry.go:31] will retry after 265.980653ms: waiting for machine to come up
	I0420 01:25:46.735144  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.735676  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.735700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.735627  143434 retry.go:31] will retry after 254.534112ms: waiting for machine to come up
	I0420 01:25:46.992222  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:46.992707  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:46.992738  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:46.992621  143434 retry.go:31] will retry after 434.179962ms: waiting for machine to come up
	I0420 01:25:47.428397  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.428949  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.428987  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.428899  143434 retry.go:31] will retry after 533.143168ms: waiting for machine to come up
	I0420 01:25:47.963467  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:47.964008  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:47.964035  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:47.963957  143434 retry.go:31] will retry after 601.536298ms: waiting for machine to come up
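The retry lines above are polling libvirt until the restarted old-k8s-version-564860 domain picks up a DHCP lease on its dedicated network. Outside the test, the same information can be watched directly with virsh (domain and network names taken from the log):

        virsh net-dhcp-leases mk-old-k8s-version-564860
        virsh domifaddr old-k8s-version-564860 --source lease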
	I0420 01:25:45.675159  141927 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:48.175457  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:25:48.175487  141927 pod_ready.go:81] duration metric: took 6.512916578s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:48.175499  141927 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:46.891233  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetIP
	I0420 01:25:46.894647  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895107  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:25:46.895170  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:25:46.895398  142057 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0420 01:25:46.900604  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:46.920025  142057 kubeadm.go:877] updating cluster {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:25:46.920184  142057 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:25:46.920247  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:46.967086  142057 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:25:46.967171  142057 ssh_runner.go:195] Run: which lz4
	I0420 01:25:46.973391  142057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:25:46.979210  142057 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:25:46.979241  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0420 01:25:48.806615  142057 crio.go:462] duration metric: took 1.83326325s to copy over tarball
	I0420 01:25:48.806701  142057 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
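Since the guest reports no preloaded images (01:25:46.967), the cached preload tarball is copied in and unpacked over /var before kubeadm runs. The equivalent manual steps, with paths taken from the log (a sketch; it stages through /tmp instead of writing /preloaded.tar.lz4 at the filesystem root, which needs root on the guest):

        KEY=/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa
        TARBALL=/home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
        scp -i "$KEY" "$TARBALL" docker@192.168.50.184:/tmp/preloaded.tar.lz4
        ssh -i "$KEY" docker@192.168.50.184 \
          'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo crictl images'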
	I0420 01:25:48.567922  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:48.568436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:48.568469  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:48.568387  143434 retry.go:31] will retry after 853.809635ms: waiting for machine to come up
	I0420 01:25:49.423590  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:49.424154  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:49.424178  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:49.424099  143434 retry.go:31] will retry after 1.096859163s: waiting for machine to come up
	I0420 01:25:50.522906  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:50.523406  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:50.523436  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:50.523350  143434 retry.go:31] will retry after 983.057252ms: waiting for machine to come up
	I0420 01:25:51.508033  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:51.508557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:51.508596  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:51.508497  143434 retry.go:31] will retry after 1.463876638s: waiting for machine to come up
	I0420 01:25:52.974032  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:52.974508  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:52.974536  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:52.974459  143434 retry.go:31] will retry after 1.859889372s: waiting for machine to come up
	I0420 01:25:50.183489  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:53.262055  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:51.389972  142057 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.583237436s)
	I0420 01:25:51.390002  142057 crio.go:469] duration metric: took 2.583356337s to extract the tarball
	I0420 01:25:51.390010  142057 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:25:51.434741  142057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:25:51.489945  142057 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:25:51.489974  142057 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:25:51.489984  142057 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0420 01:25:51.490126  142057 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-269507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:25:51.490226  142057 ssh_runner.go:195] Run: crio config
	I0420 01:25:51.548273  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:25:51.548299  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:25:51.548316  142057 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:25:51.548356  142057 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-269507 NodeName:embed-certs-269507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:25:51.548534  142057 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-269507"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
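The `"0%!"(MISSING)` values under evictionHard in the KubeletConfiguration above are the same Go fmt artifact seen elsewhere in this log; the values actually written appear to be plain "0%", i.e. nodefs/imagefs eviction is effectively disabled, consistent with the "# disable disk resource management by default" comment. The rendered file can be inspected on the guest to confirm (key path from the log):

        ssh -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa \
          docker@192.168.50.184 'grep -A4 evictionHard /var/tmp/minikube/kubeadm.yaml.new'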
	I0420 01:25:51.548614  142057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:25:51.560359  142057 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:25:51.560428  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:25:51.571609  142057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0420 01:25:51.594462  142057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:25:51.621417  142057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0420 01:25:51.649250  142057 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0420 01:25:51.655304  142057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:25:51.675476  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:25:51.809652  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:25:51.829341  142057 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507 for IP: 192.168.50.184
	I0420 01:25:51.829405  142057 certs.go:194] generating shared ca certs ...
	I0420 01:25:51.829430  142057 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:25:51.829627  142057 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:25:51.829687  142057 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:25:51.829697  142057 certs.go:256] generating profile certs ...
	I0420 01:25:51.829823  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/client.key
	I0420 01:25:52.088423  142057 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key.c1e63643
	I0420 01:25:52.088542  142057 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key
	I0420 01:25:52.088748  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:25:52.088811  142057 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:25:52.088841  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:25:52.088880  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:25:52.088919  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:25:52.088959  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:25:52.089020  142057 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:25:52.090046  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:25:52.130739  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:25:52.163426  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:25:52.202470  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:25:52.232070  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0420 01:25:52.265640  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:25:52.305670  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:25:52.336788  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/embed-certs-269507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:25:52.371507  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:25:52.403015  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:25:52.433761  142057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:25:52.461373  142057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:25:52.480675  142057 ssh_runner.go:195] Run: openssl version
	I0420 01:25:52.486965  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:25:52.499466  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506355  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.506409  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:25:52.514625  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:25:52.530107  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:25:52.544051  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549426  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.549495  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:25:52.555960  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:25:52.569332  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:25:52.583057  142057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588323  142057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.588390  142057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:25:52.594622  142057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
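(Editor's note) The lines above show minikube installing each CA into /usr/share/ca-certificates and then linking it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is how OpenSSL finds trusted CAs by hash lookup. The sketch below is a hypothetical Go helper mirroring that step by shelling out to openssl the same way the log does; the function name and paths are illustrative, not minikube's certs.go API.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links a CA certificate into /etc/ssl/certs under its OpenSSL
	// subject-hash name, mirroring the `test -L ... || ln -fs ...` step above.
	// Hypothetical helper for illustration only.
	func installCA(pemPath string) error {
		// Equivalent of: openssl x509 -hash -noout -in <pemPath>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // already linked; matches the `test -L` guard in the log
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}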
	I0420 01:25:52.607021  142057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:25:52.612270  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:25:52.619182  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:25:52.626168  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:25:52.633276  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:25:52.639840  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:25:52.646478  142057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
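(Editor's note) Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. A minimal native equivalent using crypto/x509, as a sketch rather than minikube's actual check:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresSoon reports whether the PEM certificate at path expires within d,
	// the same question `openssl x509 -checkend <seconds>` answers.
	func expiresSoon(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}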
	I0420 01:25:52.652982  142057 kubeadm.go:391] StartCluster: {Name:embed-certs-269507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-269507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:25:52.653130  142057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:25:52.653182  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.699113  142057 cri.go:89] found id: ""
	I0420 01:25:52.699200  142057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:25:52.712835  142057 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:25:52.712859  142057 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:25:52.712867  142057 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:25:52.712914  142057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:25:52.726130  142057 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:25:52.727354  142057 kubeconfig.go:125] found "embed-certs-269507" server: "https://192.168.50.184:8443"
	I0420 01:25:52.729600  142057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:25:52.744185  142057 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0420 01:25:52.744217  142057 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:25:52.744231  142057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:25:52.744292  142057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:25:52.792889  142057 cri.go:89] found id: ""
	I0420 01:25:52.792967  142057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:25:52.812771  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:25:52.824478  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:25:52.824495  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:25:52.824533  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:25:52.835612  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:25:52.835679  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:25:52.847089  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:25:52.858049  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:25:52.858126  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:25:52.872787  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.886588  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:25:52.886649  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:25:52.899467  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:25:52.910884  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:25:52.910942  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:25:52.922217  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:25:52.933432  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:53.108167  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.044709  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.257949  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:25:54.327450  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
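(Editor's note) On this restart path minikube does not run a full `kubeadm init`; it re-runs the individual init phases seen above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. A minimal sketch of driving that same phase sequence from Go, assuming the paths shown in the log; this is a hypothetical driver, not minikube's kubeadm.go:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The phase list and commands mirror the log lines above.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			c := exec.Command("/bin/bash", "-c", cmd)
			c.Stdout, c.Stderr = os.Stdout, os.Stderr
			if err := c.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}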
	I0420 01:25:54.426738  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:25:54.426849  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:54.926955  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.427198  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:25:55.489075  142057 api_server.go:72] duration metric: took 1.06233038s to wait for apiserver process to appear ...
	I0420 01:25:55.489109  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:25:55.489137  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:55.489682  142057 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0420 01:25:55.989278  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:54.836137  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:54.836639  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:54.836670  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:54.836584  143434 retry.go:31] will retry after 2.172259495s: waiting for machine to come up
	I0420 01:25:57.011412  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:57.011810  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:57.011840  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:57.011782  143434 retry.go:31] will retry after 2.279304552s: waiting for machine to come up
	I0420 01:25:55.684205  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:57.686312  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:25:58.334562  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.334594  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.334614  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.344779  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:25:58.344814  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:25:58.490111  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.499158  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.499194  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:58.989417  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:58.996443  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:58.996477  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.489585  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.496235  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:25:59.496271  142057 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:25:59.989892  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:25:59.994154  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:26:00.000276  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:26:00.000301  142057 api_server.go:131] duration metric: took 4.511183577s to wait for apiserver health ...
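(Editor's note) The healthz wait above progresses through the typical startup stages: connection refused while the apiserver binds, 403 for the anonymous probe until RBAC bootstrap roles exist, 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-controller, priority classes) finish, and finally 200. A sketch of the same ~500ms poll loop, assuming an unauthenticated probe with TLS verification skipped (which is consistent with the anonymous 403s above); not minikube's api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until it returns 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.50.184:8443/healthz", 4*time.Minute))
	}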
	I0420 01:26:00.000311  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:26:00.000317  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:00.002217  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:26:00.003646  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:26:00.018114  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
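(Editor's note) The 496-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube recommends for the kvm2 driver with crio. Its exact contents are not reproduced in the log; the Go sketch below writes a generic bridge/host-local conflist of that kind for illustration, with the 10.244.0.0/16 subnet as an assumption rather than a value taken from this run:

	package main

	import "os"

	// A generic bridge CNI config; illustrative only, not the exact file minikube generates.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
			panic(err)
		}
	}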
	I0420 01:26:00.040866  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:26:00.050481  142057 system_pods.go:59] 8 kube-system pods found
	I0420 01:26:00.050514  142057 system_pods.go:61] "coredns-7db6d8ff4d-79bzc" [af5f0029-75b5-4131-8c60-5a4fee48c618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:26:00.050524  142057 system_pods.go:61] "etcd-embed-certs-269507" [d6dfc301-0cfb-4bfb-99f7-948b77b38f53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:26:00.050533  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [915deee2-f571-4337-bcdc-07f40d06b9c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:26:00.050539  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [21c885b0-6d1b-4593-87f3-141e512af7dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:26:00.050545  142057 system_pods.go:61] "kube-proxy-crzk6" [d5972e9a-15cd-4b62-90d5-c10bdfa20989] Running
	I0420 01:26:00.050553  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [1e556102-d4c9-494c-baf2-ab7e62d7d1e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0420 01:26:00.050559  142057 system_pods.go:61] "metrics-server-569cc877fc-8s79l" [1dc06e4a-3f47-4ef1-8757-81262c52fe55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:26:00.050583  142057 system_pods.go:61] "storage-provisioner" [f7b03907-0042-48d8-981b-1b8e665d58e7] Running
	I0420 01:26:00.050600  142057 system_pods.go:74] duration metric: took 9.699819ms to wait for pod list to return data ...
	I0420 01:26:00.050608  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:26:00.053915  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:26:00.053964  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:26:00.053975  142057 node_conditions.go:105] duration metric: took 3.363162ms to run NodePressure ...
	I0420 01:26:00.053994  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:00.327736  142057 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332409  142057 kubeadm.go:733] kubelet initialised
	I0420 01:26:00.332434  142057 kubeadm.go:734] duration metric: took 4.671334ms waiting for restarted kubelet to initialise ...
	I0420 01:26:00.332446  142057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:26:00.338296  142057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
	I0420 01:25:59.292382  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:25:59.292905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:25:59.292939  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:25:59.292852  143434 retry.go:31] will retry after 4.056028382s: waiting for machine to come up
	I0420 01:26:03.350591  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:03.351022  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | unable to find current IP address of domain old-k8s-version-564860 in network mk-old-k8s-version-564860
	I0420 01:26:03.351047  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | I0420 01:26:03.350978  143434 retry.go:31] will retry after 5.38819739s: waiting for machine to come up
	I0420 01:26:00.184338  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.684685  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:02.345607  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:03.850887  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:03.850915  142057 pod_ready.go:81] duration metric: took 3.512592061s for pod "coredns-7db6d8ff4d-79bzc" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:03.850929  142057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:05.857665  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:05.183082  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:07.682906  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:10.191165  141746 start.go:364] duration metric: took 1m1.9514957s to acquireMachinesLock for "no-preload-338118"
	I0420 01:26:10.191222  141746 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:26:10.191235  141746 fix.go:54] fixHost starting: 
	I0420 01:26:10.191624  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:26:10.191668  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:26:10.212169  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I0420 01:26:10.212568  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:26:10.213074  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:26:10.213120  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:26:10.213524  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:26:10.213755  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:10.213957  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:26:10.215578  141746 fix.go:112] recreateIfNeeded on no-preload-338118: state=Stopped err=<nil>
	I0420 01:26:10.215604  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	W0420 01:26:10.215788  141746 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:26:10.217632  141746 out.go:177] * Restarting existing kvm2 VM for "no-preload-338118" ...
	I0420 01:26:10.218915  141746 main.go:141] libmachine: (no-preload-338118) Calling .Start
	I0420 01:26:10.219094  141746 main.go:141] libmachine: (no-preload-338118) Ensuring networks are active...
	I0420 01:26:10.219820  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network default is active
	I0420 01:26:10.220181  141746 main.go:141] libmachine: (no-preload-338118) Ensuring network mk-no-preload-338118 is active
	I0420 01:26:10.220584  141746 main.go:141] libmachine: (no-preload-338118) Getting domain xml...
	I0420 01:26:10.221275  141746 main.go:141] libmachine: (no-preload-338118) Creating domain...
	I0420 01:26:08.363522  142057 pod_ready.go:102] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:09.858701  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:09.858731  142057 pod_ready.go:81] duration metric: took 6.007793209s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:09.858742  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
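(Editor's note) The pod_ready.go lines here and above poll each system-critical pod until its Ready condition flips to True, logging "Ready":"False" on every unsuccessful check. A client-go sketch of the same wait, assuming a kubeconfig at the default location and a 2-second poll interval (both assumptions, not values from the log); not minikube's pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Pod name taken from the log above; keep polling across transient errors.
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-269507", metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			return isPodReady(pod), nil
		})
		fmt.Println("ready:", err == nil)
	}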
	I0420 01:26:08.743367  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.743867  142411 main.go:141] libmachine: (old-k8s-version-564860) Found IP for machine: 192.168.61.91
	I0420 01:26:08.743896  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserving static IP address...
	I0420 01:26:08.743914  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has current primary IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.744294  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.744324  142411 main.go:141] libmachine: (old-k8s-version-564860) Reserved static IP address: 192.168.61.91
	I0420 01:26:08.744344  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | skip adding static IP to network mk-old-k8s-version-564860 - found existing host DHCP lease matching {name: "old-k8s-version-564860", mac: "52:54:00:9d:63:09", ip: "192.168.61.91"}
	I0420 01:26:08.744368  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Getting to WaitForSSH function...
	I0420 01:26:08.744387  142411 main.go:141] libmachine: (old-k8s-version-564860) Waiting for SSH to be available...
	I0420 01:26:08.746714  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747119  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.747155  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.747278  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH client type: external
	I0420 01:26:08.747314  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa (-rw-------)
	I0420 01:26:08.747346  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:08.747359  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | About to run SSH command:
	I0420 01:26:08.747373  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | exit 0
	I0420 01:26:08.877633  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:08.878016  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetConfigRaw
	I0420 01:26:08.878715  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:08.881556  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.881982  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.882028  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.882326  142411 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/config.json ...
	I0420 01:26:08.882586  142411 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:08.882613  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:08.882853  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:08.885133  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885479  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:08.885510  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:08.885647  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:08.885843  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886029  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:08.886192  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:08.886403  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:08.886642  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:08.886657  142411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:09.006625  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:09.006655  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.006914  142411 buildroot.go:166] provisioning hostname "old-k8s-version-564860"
	I0420 01:26:09.006940  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.007144  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.010016  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010349  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.010374  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.010597  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.010841  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011040  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.011235  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.011439  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.011682  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.011718  142411 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-564860 && echo "old-k8s-version-564860" | sudo tee /etc/hostname
	I0420 01:26:09.155581  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-564860
	
	I0420 01:26:09.155612  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.158583  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159021  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.159068  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.159285  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.159519  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159747  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.159933  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.160128  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.160362  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.160390  142411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-564860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-564860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-564860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:09.288804  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:09.288834  142411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:09.288856  142411 buildroot.go:174] setting up certificates
	I0420 01:26:09.288867  142411 provision.go:84] configureAuth start
	I0420 01:26:09.288877  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetMachineName
	I0420 01:26:09.289286  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:09.292454  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.292884  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.292923  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.293076  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.295234  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295537  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.295565  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.295675  142411 provision.go:143] copyHostCerts
	I0420 01:26:09.295747  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:09.295758  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:09.295811  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:09.295936  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:09.295951  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:09.295981  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:09.296063  142411 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:09.296075  142411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:09.296095  142411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:09.296154  142411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-564860 san=[127.0.0.1 192.168.61.91 localhost minikube old-k8s-version-564860]
	I0420 01:26:09.436313  142411 provision.go:177] copyRemoteCerts
	I0420 01:26:09.436373  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:09.436401  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.439316  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439700  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.439743  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.439856  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.440057  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.440226  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.440360  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:09.529141  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:09.558376  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0420 01:26:09.586393  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:26:09.615274  142411 provision.go:87] duration metric: took 326.393984ms to configureAuth
	I0420 01:26:09.615300  142411 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:09.615501  142411 config.go:182] Loaded profile config "old-k8s-version-564860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0420 01:26:09.615590  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.618470  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.618905  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.618938  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.619141  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.619325  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619505  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.619662  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.619862  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:09.620073  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:09.620091  142411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:09.924929  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:09.924958  142411 machine.go:97] duration metric: took 1.042352034s to provisionDockerMachine
	I0420 01:26:09.924973  142411 start.go:293] postStartSetup for "old-k8s-version-564860" (driver="kvm2")
	I0420 01:26:09.924985  142411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:09.925021  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:09.925441  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:09.925485  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:09.927985  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928377  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:09.928407  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:09.928565  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:09.928770  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:09.928944  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:09.929114  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.020189  142411 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:10.025578  142411 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:10.025607  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:10.025707  142411 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:10.025795  142411 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:10.025888  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:10.038138  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:10.065063  142411 start.go:296] duration metric: took 140.07164ms for postStartSetup
	I0420 01:26:10.065111  142411 fix.go:56] duration metric: took 24.94209431s for fixHost
	I0420 01:26:10.065139  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.068099  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068493  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.068544  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.068697  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.068916  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069114  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.069255  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.069455  142411 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:10.069662  142411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0420 01:26:10.069678  142411 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:10.190955  142411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576370.174630368
	
	I0420 01:26:10.190984  142411 fix.go:216] guest clock: 1713576370.174630368
	I0420 01:26:10.190994  142411 fix.go:229] Guest: 2024-04-20 01:26:10.174630368 +0000 UTC Remote: 2024-04-20 01:26:10.065116719 +0000 UTC m=+276.709087933 (delta=109.513649ms)
	I0420 01:26:10.191036  142411 fix.go:200] guest clock delta is within tolerance: 109.513649ms
	I0420 01:26:10.191044  142411 start.go:83] releasing machines lock for "old-k8s-version-564860", held for 25.068071712s
	I0420 01:26:10.191074  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.191368  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:10.194872  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195333  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.195365  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.195510  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196060  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196253  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .DriverName
	I0420 01:26:10.196331  142411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:10.196375  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.196439  142411 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:10.196467  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHHostname
	I0420 01:26:10.199156  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199522  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199557  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.199572  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.199760  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.199975  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200098  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:10.200137  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.200165  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:10.200326  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.200700  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHPort
	I0420 01:26:10.200857  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHKeyPath
	I0420 01:26:10.200992  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetSSHUsername
	I0420 01:26:10.201150  142411 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/old-k8s-version-564860/id_rsa Username:docker}
	I0420 01:26:10.283430  142411 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:10.310703  142411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:10.462457  142411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:10.470897  142411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:10.470993  142411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:10.489867  142411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:10.489899  142411 start.go:494] detecting cgroup driver to use...
	I0420 01:26:10.489996  142411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:10.512741  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:10.530013  142411 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:10.530077  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:10.548567  142411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:10.565645  142411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:10.693390  142411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:10.878889  142411 docker.go:233] disabling docker service ...
	I0420 01:26:10.878973  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:10.901233  142411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:10.915219  142411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:11.053815  142411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:11.201766  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:11.218569  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:11.240543  142411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0420 01:26:11.240604  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.253384  142411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:11.253460  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.268703  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.281575  142411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:11.296477  142411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:11.312458  142411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:11.328008  142411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:11.328076  142411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:11.349027  142411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:11.362064  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:11.500624  142411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:11.665985  142411 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:11.666061  142411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:11.672929  142411 start.go:562] Will wait 60s for crictl version
	I0420 01:26:11.673006  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:11.678398  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:11.727572  142411 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:11.727663  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.760504  142411 ssh_runner.go:195] Run: crio --version
	I0420 01:26:11.803463  142411 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0420 01:26:11.804782  142411 main.go:141] libmachine: (old-k8s-version-564860) Calling .GetIP
	I0420 01:26:11.807755  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808135  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:63:09", ip: ""} in network mk-old-k8s-version-564860: {Iface:virbr2 ExpiryTime:2024-04-20 02:15:31 +0000 UTC Type:0 Mac:52:54:00:9d:63:09 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:old-k8s-version-564860 Clientid:01:52:54:00:9d:63:09}
	I0420 01:26:11.808177  142411 main.go:141] libmachine: (old-k8s-version-564860) DBG | domain old-k8s-version-564860 has defined IP address 192.168.61.91 and MAC address 52:54:00:9d:63:09 in network mk-old-k8s-version-564860
	I0420 01:26:11.808396  142411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:11.813653  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:11.830618  142411 kubeadm.go:877] updating cluster {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:11.830793  142411 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 01:26:11.830874  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:11.889149  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:11.889218  142411 ssh_runner.go:195] Run: which lz4
	I0420 01:26:11.894461  142411 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0420 01:26:11.900427  142411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0420 01:26:11.900456  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0420 01:26:10.183110  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:12.184209  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:11.636722  141746 main.go:141] libmachine: (no-preload-338118) Waiting to get IP...
	I0420 01:26:11.637635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.638048  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.638135  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.638011  143635 retry.go:31] will retry after 264.135122ms: waiting for machine to come up
	I0420 01:26:11.903486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:11.904008  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:11.904053  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:11.903958  143635 retry.go:31] will retry after 367.952741ms: waiting for machine to come up
	I0420 01:26:12.273951  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.274547  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.274584  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.274491  143635 retry.go:31] will retry after 390.958735ms: waiting for machine to come up
	I0420 01:26:12.667348  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:12.667888  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:12.667915  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:12.667820  143635 retry.go:31] will retry after 554.212994ms: waiting for machine to come up
	I0420 01:26:13.223423  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.224158  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.224184  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.224058  143635 retry.go:31] will retry after 686.102207ms: waiting for machine to come up
	I0420 01:26:13.911430  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:13.912019  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:13.912042  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:13.911968  143635 retry.go:31] will retry after 875.263983ms: waiting for machine to come up
	I0420 01:26:14.788949  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:14.789431  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:14.789481  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:14.789392  143635 retry.go:31] will retry after 847.129796ms: waiting for machine to come up
	I0420 01:26:15.637863  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:15.638348  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:15.638379  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:15.638288  143635 retry.go:31] will retry after 1.162423805s: waiting for machine to come up
	I0420 01:26:11.866297  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:13.868499  142057 pod_ready.go:102] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:14.867208  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.867241  142057 pod_ready.go:81] duration metric: took 5.008488667s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.867254  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875100  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.875119  142057 pod_ready.go:81] duration metric: took 7.856647ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.875131  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880630  142057 pod_ready.go:92] pod "kube-proxy-crzk6" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.880651  142057 pod_ready.go:81] duration metric: took 5.512379ms for pod "kube-proxy-crzk6" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.880661  142057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885625  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:26:14.885645  142057 pod_ready.go:81] duration metric: took 4.976632ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.885656  142057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
	I0420 01:26:14.031960  142411 crio.go:462] duration metric: took 2.137532848s to copy over tarball
	I0420 01:26:14.032043  142411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0420 01:26:17.581625  142411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.549548059s)
	I0420 01:26:17.581660  142411 crio.go:469] duration metric: took 3.549666471s to extract the tarball
	I0420 01:26:17.581672  142411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0420 01:26:17.633172  142411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:17.679514  142411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0420 01:26:17.679544  142411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:17.679710  142411 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.679940  142411 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.680051  142411 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.680061  142411 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.680225  142411 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.680266  142411 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0420 01:26:17.680442  142411 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.680516  142411 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.682336  142411 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.682425  142411 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0420 01:26:17.682428  142411 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.682462  142411 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.682341  142411 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.682512  142411 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.682952  142411 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.682955  142411 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.846602  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.850673  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.866812  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:17.871983  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:17.876346  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0420 01:26:17.876745  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:17.881269  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:17.985788  142411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:17.997662  142411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0420 01:26:17.997709  142411 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0420 01:26:17.997716  142411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:17.997751  142411 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0420 01:26:17.997778  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:17.997797  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071610  142411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0420 01:26:18.071682  142411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.071705  142411 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0420 01:26:18.071741  142411 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.071760  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.071793  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.085631  142411 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0420 01:26:18.085689  142411 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0420 01:26:18.085748  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.087239  142411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0420 01:26:18.087288  142411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.087362  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.094891  142411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0420 01:26:18.094940  142411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:18.094989  142411 ssh_runner.go:195] Run: which crictl
	I0420 01:26:18.232524  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0420 01:26:18.232613  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0420 01:26:18.232649  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0420 01:26:18.232595  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0420 01:26:18.232682  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0420 01:26:18.232710  142411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0420 01:26:14.684499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:17.185481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:16.802494  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:16.802977  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:16.803009  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:16.802908  143635 retry.go:31] will retry after 1.370900633s: waiting for machine to come up
	I0420 01:26:18.175474  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:18.175996  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:18.176022  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:18.175943  143635 retry.go:31] will retry after 1.698879408s: waiting for machine to come up
	I0420 01:26:19.876437  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:19.876901  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:19.876932  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:19.876843  143635 retry.go:31] will retry after 2.622833508s: waiting for machine to come up
	I0420 01:26:16.894119  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.894941  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:18.408724  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0420 01:26:18.408791  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0420 01:26:18.410041  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0420 01:26:18.410136  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0420 01:26:18.424042  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0420 01:26:18.428203  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0420 01:26:18.428295  142411 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0420 01:26:18.450170  142411 cache_images.go:92] duration metric: took 770.600266ms to LoadCachedImages
	W0420 01:26:18.450288  142411 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0420 01:26:18.450305  142411 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.20.0 crio true true} ...
	I0420 01:26:18.450428  142411 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-564860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:18.450522  142411 ssh_runner.go:195] Run: crio config
	I0420 01:26:18.503362  142411 cni.go:84] Creating CNI manager for ""
	I0420 01:26:18.503407  142411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:18.503427  142411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:18.503463  142411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-564860 NodeName:old-k8s-version-564860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0420 01:26:18.503671  142411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-564860"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:18.503745  142411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0420 01:26:18.516393  142411 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:18.516475  142411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:18.529038  142411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0420 01:26:18.550442  142411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:18.572012  142411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0420 01:26:18.595682  142411 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:18.602036  142411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:18.622226  142411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:18.774466  142411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:18.795074  142411 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860 for IP: 192.168.61.91
	I0420 01:26:18.795104  142411 certs.go:194] generating shared ca certs ...
	I0420 01:26:18.795125  142411 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:18.795301  142411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:18.795342  142411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:18.795352  142411 certs.go:256] generating profile certs ...
	I0420 01:26:18.795433  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/client.key
	I0420 01:26:18.795487  142411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key.d235183f
	I0420 01:26:18.795524  142411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key
	I0420 01:26:18.795645  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:18.795675  142411 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:18.795685  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:18.795706  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:18.795735  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:18.795765  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:18.795828  142411 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:18.796607  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:18.845581  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:18.891065  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:18.933536  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:18.977381  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:26:19.009816  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:19.042053  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:19.090614  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/old-k8s-version-564860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0420 01:26:19.119554  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:19.147545  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:19.177775  142411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:19.211008  142411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:19.234399  142411 ssh_runner.go:195] Run: openssl version
	I0420 01:26:19.242808  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:19.256132  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261681  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.261739  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:19.270546  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:19.284112  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:19.296998  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302497  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.302551  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:19.310883  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:19.325130  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:19.338964  142411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344915  142411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.344986  142411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:19.351926  142411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:19.366428  142411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:19.372391  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:19.379606  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:19.386698  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:19.395102  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:19.401981  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:19.409477  142411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0420 01:26:19.416444  142411 kubeadm.go:391] StartCluster: {Name:old-k8s-version-564860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-564860 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:19.416557  142411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:19.416600  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.460782  142411 cri.go:89] found id: ""
	I0420 01:26:19.460884  142411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:19.473812  142411 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:19.473832  142411 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:19.473838  142411 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:19.473899  142411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:19.486686  142411 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:19.487757  142411 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-564860" does not appear in /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:26:19.488411  142411 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-76456/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-564860" cluster setting kubeconfig missing "old-k8s-version-564860" context setting]
	I0420 01:26:19.489438  142411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:19.491237  142411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:19.503483  142411 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0420 01:26:19.503519  142411 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:19.503530  142411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:19.503597  142411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:19.546350  142411 cri.go:89] found id: ""
	I0420 01:26:19.546438  142411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:19.568177  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:19.580545  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:19.580573  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:19.580658  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:19.592945  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:19.593010  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:19.605598  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:19.617261  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:19.617346  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:19.629242  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.640143  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:19.640211  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:19.654226  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:19.666207  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:19.666275  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
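	The four checks above all apply the same rule: each kubeconfig-style file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so kubeadm can regenerate it in the next step. A minimal shell sketch of that pattern, assuming the same paths and endpoint that appear in the log:

	    for f in admin kubelet controller-manager scheduler; do
	      # keep the file only when it already points at the expected endpoint
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done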
	I0420 01:26:19.678899  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:19.694374  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:19.845435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.619142  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:20.891265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:21.020834  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
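	The restart path does not run a full `kubeadm init`; it replays the individual init phases against the refreshed /var/tmp/minikube/kubeadm.yaml. The five commands above, condensed into one loop as an illustrative sketch (same binary path and config file as logged; minikube itself issues them one by one):

	    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done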
	I0420 01:26:21.124545  142411 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:21.124652  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:21.625462  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.125171  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:22.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:23.125077  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:19.685129  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.183561  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:22.502227  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:22.502665  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:22.502696  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:22.502603  143635 retry.go:31] will retry after 3.3877716s: waiting for machine to come up
	I0420 01:26:21.392042  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.392579  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.394230  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:23.625392  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.125446  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.625035  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:25.624718  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:26.625420  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.125162  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:27.625475  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:28.125637  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:24.685014  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:27.182545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:25.891769  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:25.892321  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:25.892353  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:25.892252  143635 retry.go:31] will retry after 3.395760477s: waiting for machine to come up
	I0420 01:26:29.290361  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:29.290858  141746 main.go:141] libmachine: (no-preload-338118) DBG | unable to find current IP address of domain no-preload-338118 in network mk-no-preload-338118
	I0420 01:26:29.290907  141746 main.go:141] libmachine: (no-preload-338118) DBG | I0420 01:26:29.290791  143635 retry.go:31] will retry after 4.86761736s: waiting for machine to come up
	I0420 01:26:27.892903  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:30.392680  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:28.625781  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.125145  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.625647  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.125081  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:30.625404  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.124753  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:31.625565  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.124750  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:32.624841  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:33.125120  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:29.682707  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:31.682790  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:33.683549  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.162306  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.162883  141746 main.go:141] libmachine: (no-preload-338118) Found IP for machine: 192.168.72.89
	I0420 01:26:34.162912  141746 main.go:141] libmachine: (no-preload-338118) Reserving static IP address...
	I0420 01:26:34.162928  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has current primary IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.163266  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.163296  141746 main.go:141] libmachine: (no-preload-338118) Reserved static IP address: 192.168.72.89
	I0420 01:26:34.163316  141746 main.go:141] libmachine: (no-preload-338118) DBG | skip adding static IP to network mk-no-preload-338118 - found existing host DHCP lease matching {name: "no-preload-338118", mac: "52:54:00:14:65:26", ip: "192.168.72.89"}
	I0420 01:26:34.163335  141746 main.go:141] libmachine: (no-preload-338118) DBG | Getting to WaitForSSH function...
	I0420 01:26:34.163350  141746 main.go:141] libmachine: (no-preload-338118) Waiting for SSH to be available...
	I0420 01:26:34.165641  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.165947  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.165967  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.166136  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH client type: external
	I0420 01:26:34.166161  141746 main.go:141] libmachine: (no-preload-338118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa (-rw-------)
	I0420 01:26:34.166190  141746 main.go:141] libmachine: (no-preload-338118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0420 01:26:34.166216  141746 main.go:141] libmachine: (no-preload-338118) DBG | About to run SSH command:
	I0420 01:26:34.166232  141746 main.go:141] libmachine: (no-preload-338118) DBG | exit 0
	I0420 01:26:34.293435  141746 main.go:141] libmachine: (no-preload-338118) DBG | SSH cmd err, output: <nil>: 
	I0420 01:26:34.293789  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetConfigRaw
	I0420 01:26:34.294381  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.296958  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297355  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.297391  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.297670  141746 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/config.json ...
	I0420 01:26:34.297915  141746 machine.go:94] provisionDockerMachine start ...
	I0420 01:26:34.297945  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:34.298191  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.300645  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.301068  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.301280  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.301496  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301719  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.301895  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.302104  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.302272  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.302284  141746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:26:34.419082  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0420 01:26:34.419113  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419424  141746 buildroot.go:166] provisioning hostname "no-preload-338118"
	I0420 01:26:34.419452  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.419715  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.422630  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423010  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.423052  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.423212  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.423415  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423599  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.423716  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.423928  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.424135  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.424149  141746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-338118 && echo "no-preload-338118" | sudo tee /etc/hostname
	I0420 01:26:34.555223  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-338118
	
	I0420 01:26:34.555254  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.558217  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558606  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.558643  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.558792  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.558999  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559241  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.559423  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.559655  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:34.559827  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:34.559844  141746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-338118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-338118/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-338118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:26:34.684192  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:26:34.684226  141746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18703-76456/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-76456/.minikube}
	I0420 01:26:34.684261  141746 buildroot.go:174] setting up certificates
	I0420 01:26:34.684270  141746 provision.go:84] configureAuth start
	I0420 01:26:34.684289  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetMachineName
	I0420 01:26:34.684581  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:34.687363  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687703  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.687733  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.687876  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.690220  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690542  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.690569  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.690739  141746 provision.go:143] copyHostCerts
	I0420 01:26:34.690806  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem, removing ...
	I0420 01:26:34.690817  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem
	I0420 01:26:34.690869  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/ca.pem (1078 bytes)
	I0420 01:26:34.691006  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem, removing ...
	I0420 01:26:34.691017  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem
	I0420 01:26:34.691038  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/cert.pem (1123 bytes)
	I0420 01:26:34.691103  141746 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem, removing ...
	I0420 01:26:34.691111  141746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem
	I0420 01:26:34.691130  141746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-76456/.minikube/key.pem (1675 bytes)
	I0420 01:26:34.691178  141746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem org=jenkins.no-preload-338118 san=[127.0.0.1 192.168.72.89 localhost minikube no-preload-338118]
	I0420 01:26:34.899595  141746 provision.go:177] copyRemoteCerts
	I0420 01:26:34.899652  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:26:34.899676  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:34.902298  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902745  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:34.902777  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:34.902956  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:34.903150  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:34.903309  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:34.903457  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:34.993263  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0420 01:26:35.024837  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0420 01:26:35.054254  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:26:35.082455  141746 provision.go:87] duration metric: took 398.171071ms to configureAuth
	I0420 01:26:35.082488  141746 buildroot.go:189] setting minikube options for container-runtime
	I0420 01:26:35.082741  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:26:35.082822  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.085868  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086264  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.086313  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.086481  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.086708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.086868  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.087051  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.087254  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.087424  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.087440  141746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:26:35.374277  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:26:35.374305  141746 machine.go:97] duration metric: took 1.076369907s to provisionDockerMachine
	I0420 01:26:35.374327  141746 start.go:293] postStartSetup for "no-preload-338118" (driver="kvm2")
	I0420 01:26:35.374342  141746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:26:35.374366  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.374733  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:26:35.374787  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.378647  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.378998  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.379038  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.379149  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.379353  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.379518  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.379694  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.468711  141746 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:26:35.473783  141746 info.go:137] Remote host: Buildroot 2023.02.9
	I0420 01:26:35.473808  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/addons for local assets ...
	I0420 01:26:35.473929  141746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-76456/.minikube/files for local assets ...
	I0420 01:26:35.474088  141746 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem -> 837422.pem in /etc/ssl/certs
	I0420 01:26:35.474217  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:26:35.484161  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:35.511695  141746 start.go:296] duration metric: took 137.354669ms for postStartSetup
	I0420 01:26:35.511751  141746 fix.go:56] duration metric: took 25.320502022s for fixHost
	I0420 01:26:35.511780  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.514635  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515042  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.515067  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.515247  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.515448  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515663  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.515814  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.515988  141746 main.go:141] libmachine: Using SSH client type: native
	I0420 01:26:35.516218  141746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I0420 01:26:35.516240  141746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0420 01:26:35.632029  141746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576395.615634246
	
	I0420 01:26:35.632057  141746 fix.go:216] guest clock: 1713576395.615634246
	I0420 01:26:35.632067  141746 fix.go:229] Guest: 2024-04-20 01:26:35.615634246 +0000 UTC Remote: 2024-04-20 01:26:35.511757232 +0000 UTC m=+369.861721674 (delta=103.877014ms)
	I0420 01:26:35.632113  141746 fix.go:200] guest clock delta is within tolerance: 103.877014ms
	I0420 01:26:35.632137  141746 start.go:83] releasing machines lock for "no-preload-338118", held for 25.440933699s
	I0420 01:26:35.632168  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.632486  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:35.635888  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636400  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.636440  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.636751  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637250  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637448  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:26:35.637547  141746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:26:35.637597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.637694  141746 ssh_runner.go:195] Run: cat /version.json
	I0420 01:26:35.637720  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:26:35.640562  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640800  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.640953  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.640969  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641244  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641389  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:35.641433  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641486  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:35.641644  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.641670  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:26:35.641806  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:35.641873  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:26:35.641997  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:26:35.642163  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:26:32.892859  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:34.893134  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:35.749528  141746 ssh_runner.go:195] Run: systemctl --version
	I0420 01:26:35.756960  141746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:26:35.912075  141746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0420 01:26:35.920264  141746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0420 01:26:35.920355  141746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:26:35.937729  141746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0420 01:26:35.937753  141746 start.go:494] detecting cgroup driver to use...
	I0420 01:26:35.937811  141746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:26:35.954425  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:26:35.970967  141746 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:26:35.971023  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:26:35.986186  141746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:26:36.000803  141746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:26:36.114673  141746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:26:36.273386  141746 docker.go:233] disabling docker service ...
	I0420 01:26:36.273472  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:26:36.290471  141746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:26:36.305722  141746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:26:36.459528  141746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:26:36.609105  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:26:36.627255  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:26:36.651459  141746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:26:36.651535  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.663171  141746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:26:36.663255  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.674706  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.686196  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.697909  141746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:26:36.709625  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.720746  141746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:26:36.740333  141746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
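	Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf pointing the runtime at the pinned pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. A sketch of the resulting keys, reconstructed only from the commands shown here (the real file carries additional settings):

	    sudo cat /etc/crio/crio.conf.d/02-crio.conf
	    # expected to contain, among other keys:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	    #   ]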
	I0420 01:26:36.752898  141746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:26:36.764600  141746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0420 01:26:36.764653  141746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0420 01:26:36.780697  141746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:26:36.791440  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:36.936761  141746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:26:37.095374  141746 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:26:37.095475  141746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:26:37.101601  141746 start.go:562] Will wait 60s for crictl version
	I0420 01:26:37.101673  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.106191  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:26:37.152257  141746 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0420 01:26:37.152361  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.187172  141746 ssh_runner.go:195] Run: crio --version
	I0420 01:26:37.225203  141746 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0420 01:26:33.625596  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.124972  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:34.624791  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.125630  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:35.624815  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.125677  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.625631  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.125592  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:37.624883  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:38.124924  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:36.183893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.184381  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:37.226708  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetIP
	I0420 01:26:37.229679  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230090  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:26:37.230131  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:26:37.230253  141746 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0420 01:26:37.234914  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:26:37.249029  141746 kubeadm.go:877] updating cluster {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:26:37.249155  141746 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:26:37.249208  141746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:26:37.287235  141746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0420 01:26:37.287270  141746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0420 01:26:37.287341  141746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.287379  141746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.287387  141746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.287363  141746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.287414  141746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.287378  141746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.287399  141746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.287365  141746 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288833  141746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.288849  141746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.288863  141746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.288922  141746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.288933  141746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.288831  141746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.288957  141746 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0420 01:26:37.288985  141746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.452705  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.462178  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.463495  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0420 01:26:37.469562  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.480726  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.501069  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.517291  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.533934  141746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0420 01:26:37.533976  141746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.534032  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.578341  141746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.602332  141746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0420 01:26:37.602381  141746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.602432  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.718979  141746 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0420 01:26:37.719028  141746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0420 01:26:37.719065  141746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0420 01:26:37.719093  141746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.719100  141746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0420 01:26:37.719126  141746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.719153  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719220  141746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0420 01:26:37.719256  141746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.719067  141746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.719155  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719306  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0420 01:26:37.719309  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719036  141746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.719369  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.719154  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0420 01:26:37.719297  141746 ssh_runner.go:195] Run: which crictl
	I0420 01:26:37.733974  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0420 01:26:37.802462  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0420 01:26:37.802496  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0420 01:26:37.802544  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0420 01:26:37.802575  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0420 01:26:37.802637  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.802648  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.802708  141746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0420 01:26:37.802725  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0420 01:26:37.802788  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:37.897150  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0420 01:26:37.897190  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0420 01:26:37.897259  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:37.897268  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0420 01:26:37.897278  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:37.897285  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.897295  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0420 01:26:37.897337  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0420 01:26:37.902046  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0420 01:26:37.902094  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0420 01:26:37.902151  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:37.902307  141746 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0420 01:26:37.902399  141746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:37.914016  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0420 01:26:40.184815  141746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.287511777s)
	I0420 01:26:40.184859  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0420 01:26:40.184918  141746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.282742718s)
	I0420 01:26:40.184951  141746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.282534359s)
	I0420 01:26:40.184974  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0420 01:26:40.184981  141746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0420 01:26:40.185052  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.287690505s)
	I0420 01:26:40.185081  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0420 01:26:40.185113  141746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:40.185175  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0420 01:26:37.392757  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:39.394094  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:38.624766  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.125330  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:39.624953  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.125409  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.625125  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.125460  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:41.625041  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.125103  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:42.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:43.125237  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:40.186531  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.683524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:42.252666  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067465398s)
	I0420 01:26:42.252710  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0420 01:26:42.252735  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:42.252774  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0420 01:26:44.616564  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.363755421s)
	I0420 01:26:44.616614  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0420 01:26:44.616649  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:44.616713  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0420 01:26:41.394300  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.895493  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:43.625155  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.124986  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:44.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.125834  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.625359  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.125706  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:46.625115  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.125204  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:47.625746  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:48.124803  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:45.183628  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:47.684002  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:46.894590  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.277850916s)
	I0420 01:26:46.894626  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0420 01:26:46.894655  141746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:46.894712  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0420 01:26:49.158327  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.263583483s)
	I0420 01:26:49.158370  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0420 01:26:49.158406  141746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:49.158478  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0420 01:26:50.223297  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.06478687s)
	I0420 01:26:50.223344  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0420 01:26:50.223382  141746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:50.223452  141746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0420 01:26:46.393020  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.394414  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:50.893840  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:48.624957  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.125441  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:49.625078  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.124787  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.624817  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.125211  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:51.625408  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.124903  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:52.624826  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:53.124728  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:50.183173  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:52.183563  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:54.187354  141746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.963876859s)
	I0420 01:26:54.187388  141746 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18703-76456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0420 01:26:54.187416  141746 cache_images.go:123] Successfully loaded all cached images
	I0420 01:26:54.187426  141746 cache_images.go:92] duration metric: took 16.900140079s to LoadCachedImages
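The run above follows a check-and-load cycle for every required image: inspect it in the CRI-O/podman store, drop any stale tag with crictl, then stream the cached tarball back in with podman load. A minimal Go sketch of that cycle, assuming sudo access to podman and crictl on the node; the helper name and the tarball path are illustrative and not taken from minikube's cache_images code.

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached image tarball into the CRI-O/podman store
// if the image is not already present. Paths and image names here are
// illustrative, not minikube's real cache layout.
func ensureImage(image, tarball string) error {
	// "podman image inspect" exits non-zero when the image is absent.
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to do
	}
	// Drop any stale tag first, ignoring "not found" errors.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// Stream the cached tarball into the store.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := ensureImage("registry.k8s.io/kube-scheduler:v1.30.0",
		"/var/lib/minikube/images/kube-scheduler_v1.30.0"); err != nil {
		fmt.Println(err)
	}
}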
	I0420 01:26:54.187439  141746 kubeadm.go:928] updating node { 192.168.72.89 8443 v1.30.0 crio true true} ...
	I0420 01:26:54.187545  141746 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-338118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:26:54.187608  141746 ssh_runner.go:195] Run: crio config
	I0420 01:26:54.245888  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:26:54.245914  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:26:54.245928  141746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:26:54.245954  141746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.89 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-338118 NodeName:no-preload-338118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:26:54.246153  141746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-338118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 01:26:54.246232  141746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:26:54.259262  141746 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:26:54.259360  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 01:26:54.270769  141746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0420 01:26:54.290436  141746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:26:54.311846  141746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
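The multi-document kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new before the restart phases run. One way to sanity-check such a rendered file is to decode its YAML documents and confirm the fields the restart depends on; a sketch using gopkg.in/yaml.v3, which is an assumed dependency rather than anything the test harness itself does.

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// clusterConfig captures only the fields we want to verify from the
// ClusterConfiguration document; everything else is ignored.
type clusterConfig struct {
	Kind                 string `yaml:"kind"`
	KubernetesVersion    string `yaml:"kubernetesVersion"`
	ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// The file holds several YAML documents separated by "---";
	// decode them one at a time and pick out ClusterConfiguration.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var cfg clusterConfig
		if err := dec.Decode(&cfg); err != nil {
			break // io.EOF (or a parse error) ends the stream
		}
		if cfg.Kind == "ClusterConfiguration" {
			fmt.Printf("version=%s endpoint=%s\n", cfg.KubernetesVersion, cfg.ControlPlaneEndpoint)
		}
	}
}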
	I0420 01:26:54.332517  141746 ssh_runner.go:195] Run: grep 192.168.72.89	control-plane.minikube.internal$ /etc/hosts
	I0420 01:26:54.336874  141746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
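The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: it drops any existing line for that host and appends the current node IP. The same idea in Go, written against a copy of the hosts file so the sketch stays harmless; the file name and permissions are assumptions.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps
// host to ip, preserving every other entry. This mirrors the grep/echo/cp
// update in the log, but against whatever path it is given.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any previous mapping for this host
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Operates on a local copy rather than the real /etc/hosts.
	if err := pinHost("hosts.copy", "192.168.72.89", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}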
	I0420 01:26:54.350084  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:26:54.466328  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:26:54.484511  141746 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118 for IP: 192.168.72.89
	I0420 01:26:54.484545  141746 certs.go:194] generating shared ca certs ...
	I0420 01:26:54.484609  141746 certs.go:226] acquiring lock for ca certs: {Name:mk8b05008ea79296d881c722adfabc65a57f02ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:26:54.484846  141746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key
	I0420 01:26:54.484960  141746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key
	I0420 01:26:54.484996  141746 certs.go:256] generating profile certs ...
	I0420 01:26:54.485165  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/client.key
	I0420 01:26:54.485273  141746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key.f8d917a4
	I0420 01:26:54.485353  141746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key
	I0420 01:26:54.485543  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem (1338 bytes)
	W0420 01:26:54.485604  141746 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742_empty.pem, impossibly tiny 0 bytes
	I0420 01:26:54.485622  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca-key.pem (1675 bytes)
	I0420 01:26:54.485667  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/ca.pem (1078 bytes)
	I0420 01:26:54.485707  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:26:54.485741  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/certs/key.pem (1675 bytes)
	I0420 01:26:54.485804  141746 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem (1708 bytes)
	I0420 01:26:54.486486  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:26:54.539867  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:26:54.575443  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:26:54.609857  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0420 01:26:54.638338  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0420 01:26:54.672043  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:26:54.704197  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:26:54.733771  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/no-preload-338118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:26:54.761911  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/certs/83742.pem --> /usr/share/ca-certificates/83742.pem (1338 bytes)
	I0420 01:26:54.789278  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/ssl/certs/837422.pem --> /usr/share/ca-certificates/837422.pem (1708 bytes)
	I0420 01:26:54.816890  141746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-76456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:26:54.845884  141746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:26:54.864508  141746 ssh_runner.go:195] Run: openssl version
	I0420 01:26:54.870717  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83742.pem && ln -fs /usr/share/ca-certificates/83742.pem /etc/ssl/certs/83742.pem"
	I0420 01:26:54.883192  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888532  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:09 /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.888588  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83742.pem
	I0420 01:26:54.895258  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83742.pem /etc/ssl/certs/51391683.0"
	I0420 01:26:54.907346  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/837422.pem && ln -fs /usr/share/ca-certificates/837422.pem /etc/ssl/certs/837422.pem"
	I0420 01:26:54.919360  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924700  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:09 /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.924773  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/837422.pem
	I0420 01:26:54.931133  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/837422.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:26:54.942845  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:26:54.954785  141746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959769  141746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 23:57 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.959856  141746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:26:54.966061  141746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:26:54.978389  141746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:26:54.983591  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:26:54.990157  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:26:54.996977  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:26:55.004103  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:26:55.010928  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:26:55.018024  141746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
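Each openssl -checkend 86400 call above asks whether a control-plane certificate expires within the next 24 hours. The equivalent check can be done without shelling out by parsing the PEM and comparing NotAfter; a minimal sketch, with the certificate path taken from the log but otherwise hypothetical.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires inside the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path is illustrative; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}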
	I0420 01:26:55.024639  141746 kubeadm.go:391] StartCluster: {Name:no-preload-338118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-338118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:26:55.024733  141746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:26:55.024784  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.073888  141746 cri.go:89] found id: ""
	I0420 01:26:55.073954  141746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:26:55.087179  141746 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:26:55.087199  141746 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:26:55.087208  141746 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:26:55.087255  141746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:26:55.098975  141746 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:26:55.100487  141746 kubeconfig.go:125] found "no-preload-338118" server: "https://192.168.72.89:8443"
	I0420 01:26:55.103557  141746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:26:55.114871  141746 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.89
	I0420 01:26:55.114900  141746 kubeadm.go:1154] stopping kube-system containers ...
	I0420 01:26:55.114914  141746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0420 01:26:55.114983  141746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:26:55.174863  141746 cri.go:89] found id: ""
	I0420 01:26:55.174969  141746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0420 01:26:55.192867  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:26:55.203842  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:26:55.203866  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:26:55.203919  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:26:55.214476  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:26:55.214534  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:26:55.224728  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:26:55.235353  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:26:55.235403  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:26:55.245905  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.256614  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:26:55.256678  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:26:55.266909  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:26:55.276249  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:26:55.276294  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:26:55.285758  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:26:55.295896  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:55.418331  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:53.394623  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:55.893492  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:53.625614  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.125487  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.625414  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.125150  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:55.624831  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.125438  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:56.625450  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.125591  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.625757  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:58.124963  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:54.186686  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.681991  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.682958  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:56.156484  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.376987  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:26:56.450655  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
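Because existing configuration files were found, the restart rebuilds the control plane piecewise with kubeadm init phase calls (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A sketch that replays the same ordered phases from Go, using the binary and config paths shown in the log; the error handling and sudo invocation are assumptions, not minikube's restartPrimaryControlPlane logic.

package main

import (
	"fmt"
	"os/exec"
)

// restartPhases mirrors the ordered `kubeadm init phase ...` calls the log
// issues when it rebuilds an existing control plane from its config file.
func restartPhases() error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"/var/lib/minikube/binaries/v1.30.0/kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v: %s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := restartPhases(); err != nil {
		fmt.Println(err)
	}
}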
	I0420 01:26:56.517915  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:26:56.518018  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.018277  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.518215  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:57.538017  141746 api_server.go:72] duration metric: took 1.020104679s to wait for apiserver process to appear ...
	I0420 01:26:57.538045  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:26:57.538070  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
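From here the log polls https://192.168.72.89:8443/healthz until the apiserver answers, treating timeouts and connection resets as retryable. A minimal sketch of that polling loop; skipping TLS verification mirrors a probe against a self-signed control plane and is an assumption here, as is the two-minute deadline.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it answers 200
// or the deadline passes. A real client would load the cluster CA rather
// than skip verification.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not become healthy within %s", url, deadline)
}

func main() {
	if err := waitHealthz("https://192.168.72.89:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}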
	I0420 01:26:58.392944  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:00.892688  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:26:58.625549  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.125177  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:26:59.624704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.125709  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:00.625346  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.124849  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.624947  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.125407  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:02.625704  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:03.125695  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:01.182564  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.183451  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:02.538442  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:02.538498  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:03.396891  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:05.896375  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:03.625423  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.124806  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:04.625232  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.124917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.624983  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.124851  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:06.625029  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.125554  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:07.625163  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:08.125455  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:05.682216  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.683636  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:07.538926  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:07.538973  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:08.392765  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:10.392933  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:08.625100  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.125395  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:09.625454  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.125615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.624892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.125366  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:11.625074  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.125165  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:12.625629  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:13.124824  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:10.182884  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.683893  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:12.540046  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:12.540121  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:12.393561  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:14.893756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:13.625040  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.125511  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:14.624890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.125622  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.625393  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.125215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:16.625561  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.125263  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:17.624772  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:18.125597  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:15.183734  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.683742  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:17.540652  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:17.540701  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.076616  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": read tcp 192.168.72.1:34174->192.168.72.89:8443: read: connection reset by peer
	I0420 01:27:18.076671  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.077186  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:18.538798  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:18.539454  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": dial tcp 192.168.72.89:8443: connect: connection refused
	I0420 01:27:19.039080  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:17.393196  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:19.395273  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:18.624948  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.124956  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:19.625579  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.124827  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:20.625212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:21.125476  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:21.125553  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:21.174633  142411 cri.go:89] found id: ""
	I0420 01:27:21.174668  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.174679  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:21.174686  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:21.174767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:21.218230  142411 cri.go:89] found id: ""
	I0420 01:27:21.218263  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.218275  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:21.218284  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:21.218369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:21.258886  142411 cri.go:89] found id: ""
	I0420 01:27:21.258916  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.258926  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:21.258932  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:21.259003  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:21.306725  142411 cri.go:89] found id: ""
	I0420 01:27:21.306758  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.306769  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:21.306777  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:21.306843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:21.349049  142411 cri.go:89] found id: ""
	I0420 01:27:21.349086  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.349098  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:21.349106  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:21.349174  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:21.392312  142411 cri.go:89] found id: ""
	I0420 01:27:21.392338  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.392346  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:21.392352  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:21.392425  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:21.434121  142411 cri.go:89] found id: ""
	I0420 01:27:21.434148  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.434156  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:21.434162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:21.434210  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:21.473728  142411 cri.go:89] found id: ""
	I0420 01:27:21.473754  142411 logs.go:276] 0 containers: []
	W0420 01:27:21.473762  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:21.473772  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:21.473785  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:21.537607  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:21.537648  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:21.554563  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:21.554604  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:21.674778  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:21.674803  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:21.674829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:21.740625  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:21.740666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
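When no apiserver process appears, the harness falls back to gathering diagnostics: kubelet and CRI-O journals, dmesg, kubectl describe nodes, and per-component container listings via crictl ps -a. A small Go sketch of that last step, listing container IDs per control-plane component; the component list and sudo invocation are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited)
// whose name matches the given component, using crictl just as the
// diagnostics pass in the log does.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s) found\n", c, len(ids))
	}
}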
	I0420 01:27:20.182461  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:22.682574  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.039641  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:24.039690  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:21.397381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:23.893642  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:24.284890  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:24.301486  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:24.301571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:24.340987  142411 cri.go:89] found id: ""
	I0420 01:27:24.341012  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.341021  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:24.341026  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:24.341102  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:24.379983  142411 cri.go:89] found id: ""
	I0420 01:27:24.380014  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.380024  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:24.380029  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:24.380113  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:24.438700  142411 cri.go:89] found id: ""
	I0420 01:27:24.438729  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.438739  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:24.438745  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:24.438795  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:24.487761  142411 cri.go:89] found id: ""
	I0420 01:27:24.487793  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.487802  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:24.487808  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:24.487870  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:24.529408  142411 cri.go:89] found id: ""
	I0420 01:27:24.529439  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.529448  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:24.529453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:24.529523  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:24.572782  142411 cri.go:89] found id: ""
	I0420 01:27:24.572817  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.572831  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:24.572841  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:24.572910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:24.620651  142411 cri.go:89] found id: ""
	I0420 01:27:24.620684  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.620696  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:24.620704  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:24.620769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:24.659481  142411 cri.go:89] found id: ""
	I0420 01:27:24.659513  142411 logs.go:276] 0 containers: []
	W0420 01:27:24.659525  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:24.659537  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:24.659552  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.714483  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:24.714517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:24.730279  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:24.730316  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:24.804883  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:24.804909  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:24.804926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:24.879557  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:24.879602  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:27.431026  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:27.448112  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:27.448176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:27.494959  142411 cri.go:89] found id: ""
	I0420 01:27:27.494988  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.494999  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:27.495007  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:27.495075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:27.532023  142411 cri.go:89] found id: ""
	I0420 01:27:27.532055  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.532066  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:27.532075  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:27.532151  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:27.578551  142411 cri.go:89] found id: ""
	I0420 01:27:27.578600  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.578613  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:27.578621  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:27.578692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:27.618248  142411 cri.go:89] found id: ""
	I0420 01:27:27.618277  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.618288  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:27.618296  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:27.618363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:27.655682  142411 cri.go:89] found id: ""
	I0420 01:27:27.655714  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.655723  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:27.655729  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:27.655787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:27.696355  142411 cri.go:89] found id: ""
	I0420 01:27:27.696389  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.696400  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:27.696408  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:27.696478  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:27.735354  142411 cri.go:89] found id: ""
	I0420 01:27:27.735378  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.735396  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:27.735402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:27.735460  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:27.775234  142411 cri.go:89] found id: ""
	I0420 01:27:27.775261  142411 logs.go:276] 0 containers: []
	W0420 01:27:27.775269  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:27.775277  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:27.775294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:27.789970  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:27.790005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:27.873345  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:27.873371  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:27.873387  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:27.952309  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:27.952353  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:28.003746  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:28.003792  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:24.683122  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:27.182311  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:29.040691  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:29.040743  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:26.394161  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:28.893349  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.893785  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:30.555691  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:30.570962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:30.571041  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:30.613185  142411 cri.go:89] found id: ""
	I0420 01:27:30.613218  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.613227  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:30.613233  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:30.613291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:30.654494  142411 cri.go:89] found id: ""
	I0420 01:27:30.654520  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.654529  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:30.654535  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:30.654600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:30.702605  142411 cri.go:89] found id: ""
	I0420 01:27:30.702634  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.702646  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:30.702653  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:30.702719  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:30.742072  142411 cri.go:89] found id: ""
	I0420 01:27:30.742104  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.742115  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:30.742123  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:30.742191  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:30.793199  142411 cri.go:89] found id: ""
	I0420 01:27:30.793232  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.793244  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:30.793252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:30.793340  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:30.832978  142411 cri.go:89] found id: ""
	I0420 01:27:30.833019  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.833034  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:30.833044  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:30.833126  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:30.875606  142411 cri.go:89] found id: ""
	I0420 01:27:30.875641  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.875655  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:30.875662  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:30.875729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:30.917288  142411 cri.go:89] found id: ""
	I0420 01:27:30.917335  142411 logs.go:276] 0 containers: []
	W0420 01:27:30.917348  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:30.917360  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:30.917375  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:30.996446  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:30.996469  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:30.996485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:31.080494  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:31.080543  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:31.141226  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:31.141260  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:31.212808  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:31.212845  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:29.182651  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:31.183179  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.682476  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:34.041737  141746 api_server.go:269] stopped: https://192.168.72.89:8443/healthz: Get "https://192.168.72.89:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0420 01:27:34.041789  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:33.393756  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:35.395120  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:33.728927  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:33.745749  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:33.745835  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:33.788813  142411 cri.go:89] found id: ""
	I0420 01:27:33.788845  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.788859  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:33.788868  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:33.788936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:33.834918  142411 cri.go:89] found id: ""
	I0420 01:27:33.834948  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.834957  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:33.834963  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:33.835026  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:33.873928  142411 cri.go:89] found id: ""
	I0420 01:27:33.873960  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.873972  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:33.873977  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:33.874027  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:33.921462  142411 cri.go:89] found id: ""
	I0420 01:27:33.921497  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.921510  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:33.921519  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:33.921606  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:33.962280  142411 cri.go:89] found id: ""
	I0420 01:27:33.962308  142411 logs.go:276] 0 containers: []
	W0420 01:27:33.962320  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:33.962329  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:33.962390  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:34.002582  142411 cri.go:89] found id: ""
	I0420 01:27:34.002616  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.002627  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:34.002635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:34.002707  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:34.047383  142411 cri.go:89] found id: ""
	I0420 01:27:34.047410  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.047421  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:34.047428  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:34.047489  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:34.088296  142411 cri.go:89] found id: ""
	I0420 01:27:34.088341  142411 logs.go:276] 0 containers: []
	W0420 01:27:34.088352  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:34.088364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:34.088381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:34.180338  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:34.180380  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:34.224386  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:34.224422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:34.278451  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:34.278488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:34.294377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:34.294409  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:34.377115  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:36.878000  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:36.896875  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:36.896953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:36.953915  142411 cri.go:89] found id: ""
	I0420 01:27:36.953954  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.953968  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:36.953977  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:36.954056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:36.998223  142411 cri.go:89] found id: ""
	I0420 01:27:36.998250  142411 logs.go:276] 0 containers: []
	W0420 01:27:36.998260  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:36.998268  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:36.998337  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:37.069299  142411 cri.go:89] found id: ""
	I0420 01:27:37.069346  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.069358  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:37.069366  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:37.069436  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:37.112068  142411 cri.go:89] found id: ""
	I0420 01:27:37.112100  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.112112  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:37.112119  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:37.112175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:37.155883  142411 cri.go:89] found id: ""
	I0420 01:27:37.155913  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.155924  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:37.155933  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:37.156006  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:37.200979  142411 cri.go:89] found id: ""
	I0420 01:27:37.201007  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.201018  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:37.201026  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:37.201091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:37.241639  142411 cri.go:89] found id: ""
	I0420 01:27:37.241667  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.241678  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:37.241686  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:37.241748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:37.281845  142411 cri.go:89] found id: ""
	I0420 01:27:37.281883  142411 logs.go:276] 0 containers: []
	W0420 01:27:37.281894  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:37.281907  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:37.281923  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:37.327428  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:37.327463  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:37.385213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:37.385248  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:37.400158  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:37.400190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:37.476662  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:37.476687  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:37.476700  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:37.090819  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0420 01:27:37.090858  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0420 01:27:37.090877  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.124020  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.124076  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:37.538389  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:37.550894  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:37.550930  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.038486  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.051983  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:27:38.052019  141746 api_server.go:103] status: https://192.168.72.89:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:27:38.538297  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:27:38.544961  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:27:38.553038  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:27:38.553065  141746 api_server.go:131] duration metric: took 41.015012791s to wait for apiserver health ...
	I0420 01:27:38.553075  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:27:38.553081  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:27:38.554687  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:27:35.684396  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.183391  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:38.555934  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:27:38.575384  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:27:38.609934  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:27:38.637152  141746 system_pods.go:59] 8 kube-system pods found
	I0420 01:27:38.637184  141746 system_pods.go:61] "coredns-7db6d8ff4d-r2hs7" [981840a2-82cd-49e0-8d4f-fbaf05290668] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:27:38.637191  141746 system_pods.go:61] "etcd-no-preload-338118" [92fc0da4-63d3-4f34-a5a6-27b73e7e210d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0420 01:27:38.637198  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [9f7bd5df-f733-4944-9ad2-0c9f0ea4529b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:27:38.637206  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [d7a0bd6a-2cd0-4b27-ae83-ae38c1a20c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:27:38.637215  141746 system_pods.go:61] "kube-proxy-zgq86" [d379ae65-c579-47e4-b055-6512e74868a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0420 01:27:38.637219  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [99558213-289d-4682-ba8e-20175c815563] Running
	I0420 01:27:38.637225  141746 system_pods.go:61] "metrics-server-569cc877fc-lcbcz" [1d2b716a-555a-46aa-ae27-c40553c94288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:27:38.637229  141746 system_pods.go:61] "storage-provisioner" [a8316010-8689-42aa-9741-227bf55a16bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:27:38.637236  141746 system_pods.go:74] duration metric: took 27.280844ms to wait for pod list to return data ...
	I0420 01:27:38.637243  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:27:38.640744  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:27:38.640774  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:27:38.640791  141746 node_conditions.go:105] duration metric: took 3.542872ms to run NodePressure ...
	I0420 01:27:38.640813  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0420 01:27:38.979785  141746 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987541  141746 kubeadm.go:733] kubelet initialised
	I0420 01:27:38.987570  141746 kubeadm.go:734] duration metric: took 7.752383ms waiting for restarted kubelet to initialise ...
	I0420 01:27:38.987582  141746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:27:38.994929  141746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:38.999872  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999903  141746 pod_ready.go:81] duration metric: took 4.940439ms for pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:38.999915  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "coredns-7db6d8ff4d-r2hs7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:38.999923  141746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.004575  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004595  141746 pod_ready.go:81] duration metric: took 4.662163ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.004603  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "etcd-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.004608  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.012365  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012386  141746 pod_ready.go:81] duration metric: took 7.773001ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.012393  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-apiserver-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.012400  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:39.019091  141746 pod_ready.go:97] node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019125  141746 pod_ready.go:81] duration metric: took 6.70398ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	E0420 01:27:39.019137  141746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-338118" hosting pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-338118" has status "Ready":"False"
	I0420 01:27:39.019146  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:37.894228  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:39.899004  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:40.075888  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:40.091313  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:40.091389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:40.134013  142411 cri.go:89] found id: ""
	I0420 01:27:40.134039  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.134048  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:40.134053  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:40.134136  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:40.182108  142411 cri.go:89] found id: ""
	I0420 01:27:40.182140  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.182151  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:40.182158  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:40.182222  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:40.225406  142411 cri.go:89] found id: ""
	I0420 01:27:40.225438  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.225447  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:40.225453  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:40.225539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.267599  142411 cri.go:89] found id: ""
	I0420 01:27:40.267627  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.267636  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:40.267645  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:40.267790  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:40.309385  142411 cri.go:89] found id: ""
	I0420 01:27:40.309418  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.309439  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:40.309448  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:40.309525  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:40.351947  142411 cri.go:89] found id: ""
	I0420 01:27:40.351980  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.351993  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:40.352003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:40.352079  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:40.395583  142411 cri.go:89] found id: ""
	I0420 01:27:40.395614  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.395623  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:40.395629  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:40.395692  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:40.441348  142411 cri.go:89] found id: ""
	I0420 01:27:40.441397  142411 logs.go:276] 0 containers: []
	W0420 01:27:40.441412  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:40.441426  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:40.441445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:40.498231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:40.498268  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:40.514550  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:40.514578  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:40.593580  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:40.593614  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:40.593631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:40.671736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:40.671778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:43.224892  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:43.240876  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:43.240939  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:43.281583  142411 cri.go:89] found id: ""
	I0420 01:27:43.281621  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.281634  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:43.281643  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:43.281705  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:43.321079  142411 cri.go:89] found id: ""
	I0420 01:27:43.321115  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.321125  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:43.321132  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:43.321277  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:43.365827  142411 cri.go:89] found id: ""
	I0420 01:27:43.365855  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.365864  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:43.365870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:43.365921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:40.184872  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.683826  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:41.025729  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.025868  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:45.526436  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:42.393681  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:44.401124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:43.404317  142411 cri.go:89] found id: ""
	I0420 01:27:43.404349  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.404361  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:43.404370  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:43.404443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:43.449268  142411 cri.go:89] found id: ""
	I0420 01:27:43.449299  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.449323  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:43.449331  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:43.449408  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:43.487782  142411 cri.go:89] found id: ""
	I0420 01:27:43.487829  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.487837  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:43.487844  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:43.487909  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:43.526650  142411 cri.go:89] found id: ""
	I0420 01:27:43.526677  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.526688  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:43.526695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:43.526755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:43.565288  142411 cri.go:89] found id: ""
	I0420 01:27:43.565328  142411 logs.go:276] 0 containers: []
	W0420 01:27:43.565340  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:43.565352  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:43.565368  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:43.618013  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:43.618046  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:43.634064  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:43.634101  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:43.710633  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:43.710663  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:43.710679  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:43.796658  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:43.796709  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.352329  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:46.366848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:46.366935  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:46.413643  142411 cri.go:89] found id: ""
	I0420 01:27:46.413676  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.413687  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:46.413695  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:46.413762  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:46.457976  142411 cri.go:89] found id: ""
	I0420 01:27:46.458002  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.458011  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:46.458020  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:46.458086  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:46.500291  142411 cri.go:89] found id: ""
	I0420 01:27:46.500317  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.500328  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:46.500334  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:46.500398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:46.541279  142411 cri.go:89] found id: ""
	I0420 01:27:46.541331  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.541343  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:46.541359  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:46.541442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:46.585613  142411 cri.go:89] found id: ""
	I0420 01:27:46.585642  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.585654  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:46.585661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:46.585726  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:46.634400  142411 cri.go:89] found id: ""
	I0420 01:27:46.634430  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.634441  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:46.634450  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:46.634534  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:46.676276  142411 cri.go:89] found id: ""
	I0420 01:27:46.676305  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.676313  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:46.676320  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:46.676380  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:46.719323  142411 cri.go:89] found id: ""
	I0420 01:27:46.719356  142411 logs.go:276] 0 containers: []
	W0420 01:27:46.719369  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:46.719381  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:46.719398  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:46.799735  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:46.799765  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:46.799790  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:46.878323  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:46.878371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:46.931870  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:46.931902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:46.983217  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:46.983250  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:45.182485  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.183499  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:47.526708  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:50.034262  141746 pod_ready.go:102] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:46.897249  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.393599  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:49.500147  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:49.517380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:49.517461  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:49.561300  142411 cri.go:89] found id: ""
	I0420 01:27:49.561347  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.561358  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:49.561365  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:49.561432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:49.604569  142411 cri.go:89] found id: ""
	I0420 01:27:49.604594  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.604608  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:49.604614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:49.604664  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:49.644952  142411 cri.go:89] found id: ""
	I0420 01:27:49.644983  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.644999  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:49.645006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:49.645071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:49.694719  142411 cri.go:89] found id: ""
	I0420 01:27:49.694749  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.694757  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:49.694764  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:49.694815  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:49.743821  142411 cri.go:89] found id: ""
	I0420 01:27:49.743849  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.743857  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:49.743865  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:49.743936  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:49.789125  142411 cri.go:89] found id: ""
	I0420 01:27:49.789152  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.789161  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:49.789167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:49.789233  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:49.828794  142411 cri.go:89] found id: ""
	I0420 01:27:49.828829  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.828841  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:49.828848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:49.828913  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:49.873335  142411 cri.go:89] found id: ""
	I0420 01:27:49.873366  142411 logs.go:276] 0 containers: []
	W0420 01:27:49.873375  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:49.873385  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:49.873397  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.930590  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:49.930632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:49.946850  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:49.946889  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:50.039200  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:50.039220  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:50.039236  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:50.122067  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:50.122118  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:52.664342  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:52.682978  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:52.683061  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:52.733806  142411 cri.go:89] found id: ""
	I0420 01:27:52.733836  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.733848  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:52.733855  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:52.733921  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:52.785977  142411 cri.go:89] found id: ""
	I0420 01:27:52.786008  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.786020  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:52.786027  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:52.786092  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:52.826957  142411 cri.go:89] found id: ""
	I0420 01:27:52.826987  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.826995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:52.827001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:52.827056  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:52.876208  142411 cri.go:89] found id: ""
	I0420 01:27:52.876251  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.876265  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:52.876276  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:52.876354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:52.918629  142411 cri.go:89] found id: ""
	I0420 01:27:52.918666  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.918679  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:52.918687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:52.918767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:52.967604  142411 cri.go:89] found id: ""
	I0420 01:27:52.967646  142411 logs.go:276] 0 containers: []
	W0420 01:27:52.967655  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:52.967661  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:52.967729  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:53.010948  142411 cri.go:89] found id: ""
	I0420 01:27:53.010975  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.010983  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:53.010988  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:53.011039  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:53.055569  142411 cri.go:89] found id: ""
	I0420 01:27:53.055594  142411 logs.go:276] 0 containers: []
	W0420 01:27:53.055611  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:53.055620  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:53.055633  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:53.071038  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:53.071067  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:53.151334  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:53.151364  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:53.151381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:53.238509  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:53.238553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:53.284898  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:53.284945  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:49.183562  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.682524  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:53.684003  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.027739  141746 pod_ready.go:92] pod "kube-proxy-zgq86" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.027773  141746 pod_ready.go:81] duration metric: took 12.008613872s for pod "kube-proxy-zgq86" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.027785  141746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033100  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:27:51.033124  141746 pod_ready.go:81] duration metric: took 5.331694ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:51.033136  141746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	I0420 01:27:53.041387  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.542345  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:51.896822  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:54.395015  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:55.843065  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:55.856928  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:55.857001  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:55.903058  142411 cri.go:89] found id: ""
	I0420 01:27:55.903092  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.903103  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:55.903111  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:55.903170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:55.944369  142411 cri.go:89] found id: ""
	I0420 01:27:55.944402  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.944414  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:55.944421  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:55.944474  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:55.983485  142411 cri.go:89] found id: ""
	I0420 01:27:55.983510  142411 logs.go:276] 0 containers: []
	W0420 01:27:55.983517  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:55.983523  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:55.983571  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:56.021931  142411 cri.go:89] found id: ""
	I0420 01:27:56.021956  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.021964  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:56.021970  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:56.022019  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:56.066671  142411 cri.go:89] found id: ""
	I0420 01:27:56.066705  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.066717  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:56.066724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:56.066788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:56.107724  142411 cri.go:89] found id: ""
	I0420 01:27:56.107783  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.107794  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:56.107800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:56.107854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:56.149201  142411 cri.go:89] found id: ""
	I0420 01:27:56.149234  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.149246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:56.149255  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:56.149328  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:56.189580  142411 cri.go:89] found id: ""
	I0420 01:27:56.189621  142411 logs.go:276] 0 containers: []
	W0420 01:27:56.189633  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:56.189645  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:56.189661  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:56.243425  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:56.243462  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:56.261043  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:56.261079  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:56.341944  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:56.341967  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:56.341980  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:56.423252  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:56.423294  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:27:55.684408  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.183545  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:57.542492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.040617  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:56.892991  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.893124  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:00.893660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:27:58.968894  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:27:58.984559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:27:58.984648  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:27:59.021603  142411 cri.go:89] found id: ""
	I0420 01:27:59.021634  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.021655  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:27:59.021666  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:27:59.021756  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:27:59.061592  142411 cri.go:89] found id: ""
	I0420 01:27:59.061626  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.061642  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:27:59.061649  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:27:59.061701  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:27:59.101956  142411 cri.go:89] found id: ""
	I0420 01:27:59.101986  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.101996  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:27:59.102003  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:27:59.102072  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:27:59.141104  142411 cri.go:89] found id: ""
	I0420 01:27:59.141136  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.141145  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:27:59.141151  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:27:59.141221  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:27:59.188973  142411 cri.go:89] found id: ""
	I0420 01:27:59.189005  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.189014  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:27:59.189022  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:27:59.189107  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:27:59.232598  142411 cri.go:89] found id: ""
	I0420 01:27:59.232632  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.232641  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:27:59.232647  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:27:59.232704  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:27:59.272623  142411 cri.go:89] found id: ""
	I0420 01:27:59.272660  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.272669  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:27:59.272675  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:27:59.272739  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:27:59.309951  142411 cri.go:89] found id: ""
	I0420 01:27:59.309977  142411 logs.go:276] 0 containers: []
	W0420 01:27:59.309984  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:27:59.309994  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:27:59.310005  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:27:59.366589  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:27:59.366626  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:27:59.382724  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:27:59.382756  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:27:59.461072  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:27:59.461102  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:27:59.461122  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:27:59.544736  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:27:59.544769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:02.089118  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:02.105402  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:02.105483  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:02.144665  142411 cri.go:89] found id: ""
	I0420 01:28:02.144691  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.144700  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:02.144706  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:02.144759  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:02.187471  142411 cri.go:89] found id: ""
	I0420 01:28:02.187498  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.187508  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:02.187515  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:02.187576  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:02.229206  142411 cri.go:89] found id: ""
	I0420 01:28:02.229233  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.229241  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:02.229247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:02.229335  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:02.279425  142411 cri.go:89] found id: ""
	I0420 01:28:02.279464  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.279478  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:02.279488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:02.279577  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:02.323033  142411 cri.go:89] found id: ""
	I0420 01:28:02.323066  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.323082  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:02.323090  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:02.323155  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:02.360121  142411 cri.go:89] found id: ""
	I0420 01:28:02.360158  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.360170  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:02.360178  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:02.360244  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:02.398756  142411 cri.go:89] found id: ""
	I0420 01:28:02.398786  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.398797  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:02.398804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:02.398867  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:02.437982  142411 cri.go:89] found id: ""
	I0420 01:28:02.438010  142411 logs.go:276] 0 containers: []
	W0420 01:28:02.438018  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:02.438028  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:02.438041  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:02.489396  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:02.489434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:02.506764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:02.506796  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:02.591894  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:02.591915  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:02.591929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:02.675241  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:02.675281  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:00.683139  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.684787  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:02.540829  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.041823  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:03.393076  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.396351  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:05.224296  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.238522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:05.238593  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:05.278495  142411 cri.go:89] found id: ""
	I0420 01:28:05.278529  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.278540  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:05.278549  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:05.278621  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:05.318096  142411 cri.go:89] found id: ""
	I0420 01:28:05.318122  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.318130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:05.318136  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:05.318196  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:05.358607  142411 cri.go:89] found id: ""
	I0420 01:28:05.358636  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.358653  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:05.358658  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:05.358749  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:05.417163  142411 cri.go:89] found id: ""
	I0420 01:28:05.417199  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.417211  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:05.417218  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:05.417284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:05.468566  142411 cri.go:89] found id: ""
	I0420 01:28:05.468599  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.468610  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:05.468619  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:05.468691  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:05.514005  142411 cri.go:89] found id: ""
	I0420 01:28:05.514037  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.514047  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:05.514055  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:05.514112  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:05.554972  142411 cri.go:89] found id: ""
	I0420 01:28:05.555001  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.555012  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:05.555020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:05.555083  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:05.596736  142411 cri.go:89] found id: ""
	I0420 01:28:05.596764  142411 logs.go:276] 0 containers: []
	W0420 01:28:05.596773  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:05.596787  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:05.596800  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:05.649680  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:05.649719  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:05.667583  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:05.667614  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:05.743886  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:05.743922  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:05.743939  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:05.827827  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:05.827863  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.384615  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:05.181917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.182902  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.541045  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:09.542114  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:07.892610  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:10.392899  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:08.401190  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:08.403071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:08.445453  142411 cri.go:89] found id: ""
	I0420 01:28:08.445486  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.445497  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:08.445505  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:08.445573  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:08.487598  142411 cri.go:89] found id: ""
	I0420 01:28:08.487636  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.487649  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:08.487657  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:08.487727  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:08.531416  142411 cri.go:89] found id: ""
	I0420 01:28:08.531445  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.531457  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:08.531465  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:08.531526  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:08.574964  142411 cri.go:89] found id: ""
	I0420 01:28:08.575000  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.575012  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:08.575020  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:08.575075  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:08.612644  142411 cri.go:89] found id: ""
	I0420 01:28:08.612679  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.612688  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:08.612695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:08.612748  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:08.651775  142411 cri.go:89] found id: ""
	I0420 01:28:08.651800  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.651811  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:08.651817  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:08.651869  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:08.692869  142411 cri.go:89] found id: ""
	I0420 01:28:08.692894  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.692902  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:08.692908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:08.692957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:08.731765  142411 cri.go:89] found id: ""
	I0420 01:28:08.731794  142411 logs.go:276] 0 containers: []
	W0420 01:28:08.731805  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:08.731817  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:08.731836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:08.747401  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:08.747445  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:08.831069  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:08.831091  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:08.831110  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:08.919053  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:08.919095  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:08.965814  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:08.965854  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.518303  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:11.535213  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:11.535294  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:11.577182  142411 cri.go:89] found id: ""
	I0420 01:28:11.577214  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.577223  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:11.577229  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:11.577289  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:11.615023  142411 cri.go:89] found id: ""
	I0420 01:28:11.615055  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.615064  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:11.615070  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:11.615138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:11.654062  142411 cri.go:89] found id: ""
	I0420 01:28:11.654089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.654097  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:11.654104  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:11.654170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:11.700846  142411 cri.go:89] found id: ""
	I0420 01:28:11.700875  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.700885  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:11.700892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:11.700966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:11.743061  142411 cri.go:89] found id: ""
	I0420 01:28:11.743089  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.743100  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:11.743109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:11.743175  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:11.783651  142411 cri.go:89] found id: ""
	I0420 01:28:11.783687  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.783698  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:11.783706  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:11.783781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:11.827099  142411 cri.go:89] found id: ""
	I0420 01:28:11.827130  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.827139  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:11.827144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:11.827197  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:11.867476  142411 cri.go:89] found id: ""
	I0420 01:28:11.867510  142411 logs.go:276] 0 containers: []
	W0420 01:28:11.867523  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:11.867535  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:11.867554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:11.920211  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:11.920246  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:11.937632  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:11.937670  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:12.014917  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:12.014940  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:12.014955  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:12.096549  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:12.096586  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:09.684447  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.183063  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.041220  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.540620  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:12.893441  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:15.408953  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:14.653783  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:14.667893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:14.667955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:14.710098  142411 cri.go:89] found id: ""
	I0420 01:28:14.710153  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.710164  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:14.710172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:14.710240  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:14.750891  142411 cri.go:89] found id: ""
	I0420 01:28:14.750920  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.750929  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:14.750939  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:14.751010  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:14.794062  142411 cri.go:89] found id: ""
	I0420 01:28:14.794103  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.794127  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:14.794135  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:14.794204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:14.834333  142411 cri.go:89] found id: ""
	I0420 01:28:14.834363  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.834375  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:14.834383  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:14.834446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:14.874114  142411 cri.go:89] found id: ""
	I0420 01:28:14.874148  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.874160  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:14.874168  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:14.874238  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:14.912685  142411 cri.go:89] found id: ""
	I0420 01:28:14.912711  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.912720  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:14.912726  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:14.912787  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:14.954050  142411 cri.go:89] found id: ""
	I0420 01:28:14.954076  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.954083  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:14.954089  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:14.954150  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:14.992310  142411 cri.go:89] found id: ""
	I0420 01:28:14.992348  142411 logs.go:276] 0 containers: []
	W0420 01:28:14.992357  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:14.992365  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:14.992388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:15.047471  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:15.047512  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:15.065800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:15.065842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:15.146009  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:15.146037  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:15.146058  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:15.232920  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:15.232962  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:17.781215  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:17.797404  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:17.797466  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:17.840532  142411 cri.go:89] found id: ""
	I0420 01:28:17.840564  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.840573  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:17.840579  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:17.840636  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:17.881562  142411 cri.go:89] found id: ""
	I0420 01:28:17.881588  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.881596  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:17.881602  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:17.881651  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:17.935068  142411 cri.go:89] found id: ""
	I0420 01:28:17.935098  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.935108  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:17.935115  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:17.935177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:17.980745  142411 cri.go:89] found id: ""
	I0420 01:28:17.980782  142411 logs.go:276] 0 containers: []
	W0420 01:28:17.980795  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:17.980804  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:17.980880  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:18.051120  142411 cri.go:89] found id: ""
	I0420 01:28:18.051153  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.051164  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:18.051171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:18.051235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:18.091741  142411 cri.go:89] found id: ""
	I0420 01:28:18.091776  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.091788  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:18.091796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:18.091864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:18.133438  142411 cri.go:89] found id: ""
	I0420 01:28:18.133472  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.133482  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:18.133488  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:18.133560  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:18.174624  142411 cri.go:89] found id: ""
	I0420 01:28:18.174665  142411 logs.go:276] 0 containers: []
	W0420 01:28:18.174679  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:18.174694  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:18.174713  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:18.228519  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:18.228563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:18.246452  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:18.246487  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:18.322051  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:18.322074  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:18.322088  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:14.684817  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.182405  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:16.541139  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.041191  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:17.895052  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:19.895901  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:18.404873  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:18.404904  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:20.950553  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:20.965081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:20.965139  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:21.007198  142411 cri.go:89] found id: ""
	I0420 01:28:21.007243  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.007255  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:21.007263  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:21.007330  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:21.050991  142411 cri.go:89] found id: ""
	I0420 01:28:21.051019  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.051028  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:21.051034  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:21.051104  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:21.091953  142411 cri.go:89] found id: ""
	I0420 01:28:21.091986  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.091995  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:21.092001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:21.092085  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:21.134134  142411 cri.go:89] found id: ""
	I0420 01:28:21.134164  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.134174  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:21.134181  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:21.134251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:21.173698  142411 cri.go:89] found id: ""
	I0420 01:28:21.173724  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.173731  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:21.173737  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:21.173801  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:21.221327  142411 cri.go:89] found id: ""
	I0420 01:28:21.221354  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.221362  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:21.221369  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:21.221428  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:21.262752  142411 cri.go:89] found id: ""
	I0420 01:28:21.262780  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.262791  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:21.262798  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:21.262851  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:21.303497  142411 cri.go:89] found id: ""
	I0420 01:28:21.303524  142411 logs.go:276] 0 containers: []
	W0420 01:28:21.303535  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:21.303547  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:21.303563  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:21.358231  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:21.358265  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:21.373723  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:21.373753  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:21.465016  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:21.465044  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:21.465061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:21.552087  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:21.552117  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:19.683617  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.182720  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:21.540588  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.039211  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:22.393170  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.396378  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:24.099938  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:24.116967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:24.117045  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:24.159458  142411 cri.go:89] found id: ""
	I0420 01:28:24.159491  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.159501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:24.159508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:24.159574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:24.206028  142411 cri.go:89] found id: ""
	I0420 01:28:24.206054  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.206065  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:24.206072  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:24.206137  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:24.248047  142411 cri.go:89] found id: ""
	I0420 01:28:24.248088  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.248101  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:24.248109  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:24.248176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:24.287867  142411 cri.go:89] found id: ""
	I0420 01:28:24.287898  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.287909  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:24.287917  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:24.287995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:24.329399  142411 cri.go:89] found id: ""
	I0420 01:28:24.329433  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.329444  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:24.329452  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:24.329519  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:24.367846  142411 cri.go:89] found id: ""
	I0420 01:28:24.367871  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.367882  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:24.367889  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:24.367960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:24.414245  142411 cri.go:89] found id: ""
	I0420 01:28:24.414272  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.414283  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:24.414291  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:24.414354  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:24.453268  142411 cri.go:89] found id: ""
	I0420 01:28:24.453302  142411 logs.go:276] 0 containers: []
	W0420 01:28:24.453331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:24.453344  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:24.453366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:24.514501  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:24.514546  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:24.529551  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:24.529591  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:24.613734  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.613757  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:24.613775  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:24.693804  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:24.693843  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.238443  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:27.254172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:27.254235  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:27.297048  142411 cri.go:89] found id: ""
	I0420 01:28:27.297101  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.297111  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:27.297119  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:27.297181  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:27.340145  142411 cri.go:89] found id: ""
	I0420 01:28:27.340171  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.340181  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:27.340189  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:27.340316  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:27.383047  142411 cri.go:89] found id: ""
	I0420 01:28:27.383077  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.383089  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:27.383096  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:27.383169  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:27.428088  142411 cri.go:89] found id: ""
	I0420 01:28:27.428122  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.428134  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:27.428142  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:27.428206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:27.468257  142411 cri.go:89] found id: ""
	I0420 01:28:27.468300  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.468310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:27.468317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:27.468389  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:27.508834  142411 cri.go:89] found id: ""
	I0420 01:28:27.508873  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.508885  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:27.508892  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:27.508953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:27.548853  142411 cri.go:89] found id: ""
	I0420 01:28:27.548893  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.548901  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:27.548908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:27.548956  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:27.587841  142411 cri.go:89] found id: ""
	I0420 01:28:27.587875  142411 logs.go:276] 0 containers: []
	W0420 01:28:27.587886  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:27.587899  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:27.587917  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:27.667848  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:27.667888  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:27.714820  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:27.714856  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:27.766337  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:27.766381  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:27.782585  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:27.782627  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:27.856172  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:24.184768  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.683097  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.040531  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:28.040802  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.542386  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:26.893091  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:29.393546  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:30.356809  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:30.372449  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:30.372529  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:30.422164  142411 cri.go:89] found id: ""
	I0420 01:28:30.422198  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.422209  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:30.422218  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:30.422283  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:30.460367  142411 cri.go:89] found id: ""
	I0420 01:28:30.460395  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.460404  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:30.460411  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:30.460498  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:30.508423  142411 cri.go:89] found id: ""
	I0420 01:28:30.508460  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.508471  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:30.508479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:30.508546  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:30.553124  142411 cri.go:89] found id: ""
	I0420 01:28:30.553152  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.553161  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:30.553167  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:30.553225  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:30.601866  142411 cri.go:89] found id: ""
	I0420 01:28:30.601908  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.601919  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:30.601939  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:30.602014  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:30.645413  142411 cri.go:89] found id: ""
	I0420 01:28:30.645446  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.645457  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:30.645467  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:30.645539  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:30.690955  142411 cri.go:89] found id: ""
	I0420 01:28:30.690988  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.690997  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:30.691006  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:30.691077  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:30.732146  142411 cri.go:89] found id: ""
	I0420 01:28:30.732186  142411 logs.go:276] 0 containers: []
	W0420 01:28:30.732197  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:30.732209  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:30.732228  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:30.786890  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:30.786928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:30.802887  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:30.802920  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:30.884422  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:30.884447  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:30.884461  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:30.967504  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:30.967540  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:29.183645  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.683218  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.044031  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:35.540100  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:31.897363  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:34.392658  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:33.515720  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:33.531895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:33.531953  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:33.574626  142411 cri.go:89] found id: ""
	I0420 01:28:33.574668  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.574682  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:33.574690  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:33.574757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:33.620527  142411 cri.go:89] found id: ""
	I0420 01:28:33.620553  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.620562  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:33.620568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:33.620630  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:33.659685  142411 cri.go:89] found id: ""
	I0420 01:28:33.659711  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.659719  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:33.659724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:33.659773  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:33.699390  142411 cri.go:89] found id: ""
	I0420 01:28:33.699414  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.699422  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:33.699427  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:33.699485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:33.743819  142411 cri.go:89] found id: ""
	I0420 01:28:33.743844  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.743852  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:33.743858  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:33.743907  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:33.788416  142411 cri.go:89] found id: ""
	I0420 01:28:33.788442  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.788450  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:33.788456  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:33.788514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:33.834105  142411 cri.go:89] found id: ""
	I0420 01:28:33.834129  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.834138  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:33.834144  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:33.834206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:33.884118  142411 cri.go:89] found id: ""
	I0420 01:28:33.884152  142411 logs.go:276] 0 containers: []
	W0420 01:28:33.884164  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:33.884176  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:33.884193  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:33.940493  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:33.940525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:33.954800  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:33.954829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:34.030788  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:34.030812  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:34.030829  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:34.119533  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:34.119574  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.667132  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:36.684253  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:36.684334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:36.723598  142411 cri.go:89] found id: ""
	I0420 01:28:36.723629  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.723641  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:36.723649  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:36.723718  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:36.761563  142411 cri.go:89] found id: ""
	I0420 01:28:36.761594  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.761606  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:36.761614  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:36.761679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:36.803553  142411 cri.go:89] found id: ""
	I0420 01:28:36.803590  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.803603  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:36.803611  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:36.803674  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:36.840368  142411 cri.go:89] found id: ""
	I0420 01:28:36.840407  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.840421  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:36.840430  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:36.840497  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:36.879689  142411 cri.go:89] found id: ""
	I0420 01:28:36.879724  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.879735  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:36.879743  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:36.879807  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:36.920757  142411 cri.go:89] found id: ""
	I0420 01:28:36.920785  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.920796  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:36.920809  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:36.920871  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:36.957522  142411 cri.go:89] found id: ""
	I0420 01:28:36.957548  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.957556  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:36.957562  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:36.957624  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:36.997358  142411 cri.go:89] found id: ""
	I0420 01:28:36.997390  142411 logs.go:276] 0 containers: []
	W0420 01:28:36.997400  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:36.997409  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:36.997422  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:37.055063  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:37.055105  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:37.070691  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:37.070720  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:37.150114  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:37.150140  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:37.150152  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:37.228676  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:37.228711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:36.182514  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.183398  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.040622  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.539486  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:36.395217  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:38.893457  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:40.894381  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:39.776620  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:39.792201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:39.792268  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:39.831544  142411 cri.go:89] found id: ""
	I0420 01:28:39.831568  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.831576  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:39.831588  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:39.831652  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:39.869458  142411 cri.go:89] found id: ""
	I0420 01:28:39.869488  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.869496  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:39.869503  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:39.869564  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:39.911588  142411 cri.go:89] found id: ""
	I0420 01:28:39.911615  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.911626  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:39.911633  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:39.911703  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:39.952458  142411 cri.go:89] found id: ""
	I0420 01:28:39.952489  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.952505  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:39.952513  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:39.952580  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:39.992988  142411 cri.go:89] found id: ""
	I0420 01:28:39.993016  142411 logs.go:276] 0 containers: []
	W0420 01:28:39.993023  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:39.993029  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:39.993117  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:40.038306  142411 cri.go:89] found id: ""
	I0420 01:28:40.038348  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.038359  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:40.038367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:40.038432  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:40.082185  142411 cri.go:89] found id: ""
	I0420 01:28:40.082219  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.082230  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:40.082238  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:40.082332  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:40.120346  142411 cri.go:89] found id: ""
	I0420 01:28:40.120373  142411 logs.go:276] 0 containers: []
	W0420 01:28:40.120382  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:40.120391  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:40.120405  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:40.173735  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:40.173769  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:40.191808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:40.191844  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:40.271429  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:40.271456  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:40.271473  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:40.361519  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:40.361558  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:42.938354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:42.953088  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:42.953167  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:42.992539  142411 cri.go:89] found id: ""
	I0420 01:28:42.992564  142411 logs.go:276] 0 containers: []
	W0420 01:28:42.992571  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:42.992577  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:42.992637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:43.032017  142411 cri.go:89] found id: ""
	I0420 01:28:43.032059  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.032074  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:43.032082  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:43.032142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:43.077229  142411 cri.go:89] found id: ""
	I0420 01:28:43.077258  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.077266  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:43.077272  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:43.077342  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:43.117107  142411 cri.go:89] found id: ""
	I0420 01:28:43.117128  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.117139  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:43.117145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:43.117206  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:43.156262  142411 cri.go:89] found id: ""
	I0420 01:28:43.156297  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.156310  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:43.156317  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:43.156384  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:43.195897  142411 cri.go:89] found id: ""
	I0420 01:28:43.195927  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.195935  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:43.195942  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:43.195990  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:43.230468  142411 cri.go:89] found id: ""
	I0420 01:28:43.230498  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.230513  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:43.230522  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:43.230586  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:43.271980  142411 cri.go:89] found id: ""
	I0420 01:28:43.272009  142411 logs.go:276] 0 containers: []
	W0420 01:28:43.272023  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:43.272035  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:43.272050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:43.331606  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:43.331641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:43.348411  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:43.348437  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:28:40.682973  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.182655  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:42.540341  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.039729  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:43.393377  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:45.893276  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:28:43.428628  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:43.428654  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:43.428675  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:43.511471  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:43.511506  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.056166  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:46.071677  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:46.071744  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:46.110710  142411 cri.go:89] found id: ""
	I0420 01:28:46.110740  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.110753  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:46.110761  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:46.110825  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:46.170680  142411 cri.go:89] found id: ""
	I0420 01:28:46.170712  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.170724  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:46.170731  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:46.170794  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:46.216387  142411 cri.go:89] found id: ""
	I0420 01:28:46.216413  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.216421  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:46.216429  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:46.216485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:46.258641  142411 cri.go:89] found id: ""
	I0420 01:28:46.258674  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.258685  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:46.258694  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:46.258755  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:46.296359  142411 cri.go:89] found id: ""
	I0420 01:28:46.296395  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.296407  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:46.296416  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:46.296480  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:46.335194  142411 cri.go:89] found id: ""
	I0420 01:28:46.335223  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.335238  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:46.335247  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:46.335300  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:46.373748  142411 cri.go:89] found id: ""
	I0420 01:28:46.373777  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.373789  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:46.373796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:46.373860  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:46.416960  142411 cri.go:89] found id: ""
	I0420 01:28:46.416987  142411 logs.go:276] 0 containers: []
	W0420 01:28:46.416995  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:46.417005  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:46.417017  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:46.497542  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:46.497582  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:46.548086  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:46.548136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:46.607354  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:46.607390  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:46.624379  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:46.624415  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:46.707425  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:45.682511  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.682752  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.046102  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.540014  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:47.895805  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:50.393001  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:49.208459  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:49.223081  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:49.223146  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:49.258688  142411 cri.go:89] found id: ""
	I0420 01:28:49.258718  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.258728  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:49.258734  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:49.258791  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:49.296817  142411 cri.go:89] found id: ""
	I0420 01:28:49.296859  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.296870  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:49.296878  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:49.296941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:49.337821  142411 cri.go:89] found id: ""
	I0420 01:28:49.337853  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.337863  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:49.337870  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:49.337940  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:49.381360  142411 cri.go:89] found id: ""
	I0420 01:28:49.381384  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.381392  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:49.381397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:49.381463  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:49.420099  142411 cri.go:89] found id: ""
	I0420 01:28:49.420143  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.420154  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:49.420162  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:49.420223  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:49.459810  142411 cri.go:89] found id: ""
	I0420 01:28:49.459843  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.459850  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:49.459859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:49.459911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:49.499776  142411 cri.go:89] found id: ""
	I0420 01:28:49.499808  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.499820  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:49.499828  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:49.499894  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:49.536115  142411 cri.go:89] found id: ""
	I0420 01:28:49.536147  142411 logs.go:276] 0 containers: []
	W0420 01:28:49.536158  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:49.536169  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:49.536190  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:49.594665  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:49.594701  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:49.611896  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:49.611929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:49.689667  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:49.689685  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:49.689697  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.769061  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:49.769106  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.319299  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:52.336861  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:52.336934  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:52.380690  142411 cri.go:89] found id: ""
	I0420 01:28:52.380717  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.380725  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:52.380731  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:52.380781  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:52.429798  142411 cri.go:89] found id: ""
	I0420 01:28:52.429831  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.429843  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:52.429851  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:52.429915  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:52.474087  142411 cri.go:89] found id: ""
	I0420 01:28:52.474120  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.474130  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:52.474139  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:52.474204  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:52.514739  142411 cri.go:89] found id: ""
	I0420 01:28:52.514776  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.514789  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:52.514796  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:52.514852  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:52.562100  142411 cri.go:89] found id: ""
	I0420 01:28:52.562195  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.562228  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:52.562236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:52.562324  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:52.623266  142411 cri.go:89] found id: ""
	I0420 01:28:52.623301  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.623313  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:52.623321  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:52.623386  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:52.667788  142411 cri.go:89] found id: ""
	I0420 01:28:52.667818  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.667828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:52.667838  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:52.667902  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:52.724607  142411 cri.go:89] found id: ""
	I0420 01:28:52.724636  142411 logs.go:276] 0 containers: []
	W0420 01:28:52.724645  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:52.724654  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:52.724666  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:52.774798  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:52.774836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:52.833949  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:52.833989  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:52.851757  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:52.851787  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:52.939092  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:52.939119  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:52.939136  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:49.684112  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.182596  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:51.540918  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.039528  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:52.393913  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:54.892043  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:55.525807  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:55.540481  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:55.540557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:55.584415  142411 cri.go:89] found id: ""
	I0420 01:28:55.584447  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.584458  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:55.584466  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:55.584538  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:55.623920  142411 cri.go:89] found id: ""
	I0420 01:28:55.623955  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.623965  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:55.623973  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:55.624037  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:55.667768  142411 cri.go:89] found id: ""
	I0420 01:28:55.667802  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.667810  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:55.667816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:55.667889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:55.708466  142411 cri.go:89] found id: ""
	I0420 01:28:55.708502  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.708513  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:55.708520  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:55.708600  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:55.748797  142411 cri.go:89] found id: ""
	I0420 01:28:55.748838  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.748849  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:55.748857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:55.748919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:55.791714  142411 cri.go:89] found id: ""
	I0420 01:28:55.791743  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.791752  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:55.791761  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:55.791832  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:55.833836  142411 cri.go:89] found id: ""
	I0420 01:28:55.833862  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.833872  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:55.833879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:55.833942  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:55.877425  142411 cri.go:89] found id: ""
	I0420 01:28:55.877462  142411 logs.go:276] 0 containers: []
	W0420 01:28:55.877472  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:55.877484  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:55.877501  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:55.933237  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:55.933280  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:55.949507  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:55.949534  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:56.025596  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:56.025624  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:56.025641  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:56.105403  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:56.105439  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:28:54.683664  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.684401  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.040380  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.040834  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:00.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:56.893067  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.894882  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:28:58.653368  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:28:58.669367  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:28:58.669429  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:28:58.712457  142411 cri.go:89] found id: ""
	I0420 01:28:58.712490  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.712501  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:28:58.712508  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:28:58.712574  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:28:58.750246  142411 cri.go:89] found id: ""
	I0420 01:28:58.750273  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.750281  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:28:58.750287  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:28:58.750351  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:28:58.793486  142411 cri.go:89] found id: ""
	I0420 01:28:58.793514  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.793522  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:28:58.793529  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:28:58.793595  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:28:58.839413  142411 cri.go:89] found id: ""
	I0420 01:28:58.839448  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.839461  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:28:58.839469  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:28:58.839537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:28:58.881385  142411 cri.go:89] found id: ""
	I0420 01:28:58.881418  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.881430  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:28:58.881438  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:28:58.881509  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:28:58.923900  142411 cri.go:89] found id: ""
	I0420 01:28:58.923945  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.923965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:28:58.923975  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:28:58.924038  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:28:58.962795  142411 cri.go:89] found id: ""
	I0420 01:28:58.962836  142411 logs.go:276] 0 containers: []
	W0420 01:28:58.962848  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:28:58.962856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:28:58.962919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:28:59.006309  142411 cri.go:89] found id: ""
	I0420 01:28:59.006341  142411 logs.go:276] 0 containers: []
	W0420 01:28:59.006350  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:28:59.006360  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:28:59.006372  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:28:59.062778  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:28:59.062819  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.078600  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:28:59.078630  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:28:59.159340  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:28:59.159361  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:28:59.159376  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:28:59.247257  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:28:59.247307  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:01.792687  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:01.808507  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:01.808588  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:01.851642  142411 cri.go:89] found id: ""
	I0420 01:29:01.851680  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.851691  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:01.851699  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:01.851765  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:01.891516  142411 cri.go:89] found id: ""
	I0420 01:29:01.891549  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.891560  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:01.891568  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:01.891640  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:01.934353  142411 cri.go:89] found id: ""
	I0420 01:29:01.934390  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.934402  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:01.934410  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:01.934479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:01.972552  142411 cri.go:89] found id: ""
	I0420 01:29:01.972587  142411 logs.go:276] 0 containers: []
	W0420 01:29:01.972599  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:01.972607  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:01.972711  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:02.012316  142411 cri.go:89] found id: ""
	I0420 01:29:02.012348  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.012360  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:02.012368  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:02.012423  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:02.056951  142411 cri.go:89] found id: ""
	I0420 01:29:02.056984  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.056994  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:02.057001  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:02.057164  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:02.104061  142411 cri.go:89] found id: ""
	I0420 01:29:02.104091  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.104102  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:02.104110  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:02.104163  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:02.144085  142411 cri.go:89] found id: ""
	I0420 01:29:02.144114  142411 logs.go:276] 0 containers: []
	W0420 01:29:02.144125  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:02.144137  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:02.144160  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:02.216560  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:02.216585  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:02.216598  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:02.307178  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:02.307222  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:02.349769  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:02.349798  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:02.401141  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:02.401176  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:28:59.185384  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.684462  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.685188  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:02.041060  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.540616  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:01.393943  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:03.894095  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:04.917513  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:04.934187  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:04.934266  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:04.970258  142411 cri.go:89] found id: ""
	I0420 01:29:04.970289  142411 logs.go:276] 0 containers: []
	W0420 01:29:04.970298  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:04.970304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:04.970359  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:05.012853  142411 cri.go:89] found id: ""
	I0420 01:29:05.012883  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.012893  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:05.012899  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:05.012960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:05.054793  142411 cri.go:89] found id: ""
	I0420 01:29:05.054822  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.054833  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:05.054842  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:05.054910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:05.094637  142411 cri.go:89] found id: ""
	I0420 01:29:05.094674  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.094684  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:05.094701  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:05.094770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:05.134874  142411 cri.go:89] found id: ""
	I0420 01:29:05.134903  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.134912  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:05.134918  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:05.134973  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:05.175637  142411 cri.go:89] found id: ""
	I0420 01:29:05.175668  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.175679  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:05.175687  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:05.175752  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:05.217809  142411 cri.go:89] found id: ""
	I0420 01:29:05.217847  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.217860  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:05.217867  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:05.217933  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:05.266884  142411 cri.go:89] found id: ""
	I0420 01:29:05.266917  142411 logs.go:276] 0 containers: []
	W0420 01:29:05.266930  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:05.266941  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:05.266958  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:05.323765  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:05.323818  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:05.338524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:05.338553  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:05.419860  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:05.419889  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:05.419906  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:05.506268  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:05.506311  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.055690  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:08.072692  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:08.072758  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:08.116247  142411 cri.go:89] found id: ""
	I0420 01:29:08.116287  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.116296  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:08.116304  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:08.116369  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:08.163152  142411 cri.go:89] found id: ""
	I0420 01:29:08.163177  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.163185  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:08.163190  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:08.163246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:08.207330  142411 cri.go:89] found id: ""
	I0420 01:29:08.207357  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.207365  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:08.207371  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:08.207422  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:08.249833  142411 cri.go:89] found id: ""
	I0420 01:29:08.249864  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.249873  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:08.249879  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:08.249941  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:08.290834  142411 cri.go:89] found id: ""
	I0420 01:29:08.290867  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.290876  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:08.290883  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:08.290957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:08.333767  142411 cri.go:89] found id: ""
	I0420 01:29:08.333799  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.333809  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:08.333816  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:08.333888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:08.381431  142411 cri.go:89] found id: ""
	I0420 01:29:08.381459  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.381468  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:08.381474  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:08.381532  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:06.183719  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.184829  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.544179  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:09.039956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:06.394434  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.893184  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:10.897462  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:08.423702  142411 cri.go:89] found id: ""
	I0420 01:29:08.423727  142411 logs.go:276] 0 containers: []
	W0420 01:29:08.423739  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:08.423751  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:08.423767  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:08.468422  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:08.468460  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:08.524091  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:08.524125  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:08.540294  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:08.540323  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:08.622439  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:08.622472  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:08.622488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.208472  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:11.225412  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:11.225479  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:11.273723  142411 cri.go:89] found id: ""
	I0420 01:29:11.273755  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.273767  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:11.273775  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:11.273840  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:11.316083  142411 cri.go:89] found id: ""
	I0420 01:29:11.316118  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.316130  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:11.316137  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:11.316203  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:11.355632  142411 cri.go:89] found id: ""
	I0420 01:29:11.355659  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.355668  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:11.355674  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:11.355734  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:11.397277  142411 cri.go:89] found id: ""
	I0420 01:29:11.397305  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.397327  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:11.397335  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:11.397399  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:11.439333  142411 cri.go:89] found id: ""
	I0420 01:29:11.439357  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.439366  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:11.439372  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:11.439433  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:11.477044  142411 cri.go:89] found id: ""
	I0420 01:29:11.477072  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.477079  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:11.477086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:11.477142  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:11.516150  142411 cri.go:89] found id: ""
	I0420 01:29:11.516184  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.516196  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:11.516204  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:11.516274  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:11.557272  142411 cri.go:89] found id: ""
	I0420 01:29:11.557303  142411 logs.go:276] 0 containers: []
	W0420 01:29:11.557331  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:11.557344  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:11.557366  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:11.652272  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:11.652319  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:11.700469  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:11.700504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:11.756674  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:11.756711  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:11.772377  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:11.772407  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:11.851387  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:10.682669  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:12.684335  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:11.041282  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.541986  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:13.393346  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:15.394909  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:14.352257  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:14.367635  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:14.367714  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:14.408757  142411 cri.go:89] found id: ""
	I0420 01:29:14.408779  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.408788  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:14.408794  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:14.408843  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:14.455123  142411 cri.go:89] found id: ""
	I0420 01:29:14.455150  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.455159  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:14.455165  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:14.455239  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:14.499546  142411 cri.go:89] found id: ""
	I0420 01:29:14.499573  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.499581  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:14.499587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:14.499635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:14.541811  142411 cri.go:89] found id: ""
	I0420 01:29:14.541841  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.541851  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:14.541859  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:14.541923  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:14.586965  142411 cri.go:89] found id: ""
	I0420 01:29:14.586990  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.587001  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:14.587008  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:14.587071  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:14.625251  142411 cri.go:89] found id: ""
	I0420 01:29:14.625279  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.625288  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:14.625294  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:14.625377  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:14.665038  142411 cri.go:89] found id: ""
	I0420 01:29:14.665067  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.665079  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:14.665086  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:14.665157  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:14.706931  142411 cri.go:89] found id: ""
	I0420 01:29:14.706964  142411 logs.go:276] 0 containers: []
	W0420 01:29:14.706978  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:14.706992  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:14.707044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:14.761681  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:14.761717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:14.776324  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:14.776350  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:14.856707  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:14.856727  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:14.856738  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:14.944019  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:14.944064  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:17.489112  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:17.507594  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:17.507660  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:17.556091  142411 cri.go:89] found id: ""
	I0420 01:29:17.556122  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.556132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:17.556140  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:17.556205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:17.600016  142411 cri.go:89] found id: ""
	I0420 01:29:17.600072  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.600086  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:17.600107  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:17.600171  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:17.643074  142411 cri.go:89] found id: ""
	I0420 01:29:17.643106  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.643118  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:17.643125  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:17.643190  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:17.684798  142411 cri.go:89] found id: ""
	I0420 01:29:17.684827  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.684838  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:17.684845  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:17.684910  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:17.725451  142411 cri.go:89] found id: ""
	I0420 01:29:17.725481  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.725494  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:17.725503  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:17.725575  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:17.765918  142411 cri.go:89] found id: ""
	I0420 01:29:17.765944  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.765952  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:17.765959  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:17.766023  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:17.806011  142411 cri.go:89] found id: ""
	I0420 01:29:17.806038  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.806049  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:17.806056  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:17.806122  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:17.848409  142411 cri.go:89] found id: ""
	I0420 01:29:17.848441  142411 logs.go:276] 0 containers: []
	W0420 01:29:17.848453  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:17.848465  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:17.848488  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:17.903854  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:17.903900  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:17.919156  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:17.919191  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:18.008073  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:18.008115  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:18.008133  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:18.095887  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:18.095929  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:14.687917  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.182326  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:16.039159  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:18.040487  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.540830  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:17.893270  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.392563  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:20.646919  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:20.664559  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:20.664635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:20.714440  142411 cri.go:89] found id: ""
	I0420 01:29:20.714472  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.714481  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:20.714487  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:20.714543  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:20.755249  142411 cri.go:89] found id: ""
	I0420 01:29:20.755276  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.755287  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:20.755294  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:20.755355  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:20.795744  142411 cri.go:89] found id: ""
	I0420 01:29:20.795777  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.795786  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:20.795797  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:20.795864  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:20.838083  142411 cri.go:89] found id: ""
	I0420 01:29:20.838111  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.838120  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:20.838128  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:20.838193  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:20.880198  142411 cri.go:89] found id: ""
	I0420 01:29:20.880227  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.880238  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:20.880245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:20.880312  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:20.920496  142411 cri.go:89] found id: ""
	I0420 01:29:20.920522  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.920530  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:20.920536  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:20.920618  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:20.960137  142411 cri.go:89] found id: ""
	I0420 01:29:20.960170  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.960180  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:20.960186  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:20.960251  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:20.999583  142411 cri.go:89] found id: ""
	I0420 01:29:20.999624  142411 logs.go:276] 0 containers: []
	W0420 01:29:20.999637  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:20.999649  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:20.999665  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:21.077439  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:21.077476  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:21.121104  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:21.121148  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:21.173871  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:21.173909  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:21.189767  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:21.189795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:21.264715  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:19.682554  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:21.682995  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.543452  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:25.040875  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:22.393626  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:24.894279  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:23.765605  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:23.782250  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:23.782334  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:23.827248  142411 cri.go:89] found id: ""
	I0420 01:29:23.827277  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.827285  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:23.827291  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:23.827349  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:23.867610  142411 cri.go:89] found id: ""
	I0420 01:29:23.867636  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.867645  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:23.867651  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:23.867712  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:23.906244  142411 cri.go:89] found id: ""
	I0420 01:29:23.906271  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.906278  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:23.906283  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:23.906343  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:23.952256  142411 cri.go:89] found id: ""
	I0420 01:29:23.952288  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.952306  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:23.952314  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:23.952378  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:23.992843  142411 cri.go:89] found id: ""
	I0420 01:29:23.992879  142411 logs.go:276] 0 containers: []
	W0420 01:29:23.992888  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:23.992896  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:23.992959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:24.036460  142411 cri.go:89] found id: ""
	I0420 01:29:24.036493  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.036504  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:24.036512  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:24.036582  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:24.075910  142411 cri.go:89] found id: ""
	I0420 01:29:24.075944  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.075955  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:24.075962  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:24.076033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:24.122638  142411 cri.go:89] found id: ""
	I0420 01:29:24.122676  142411 logs.go:276] 0 containers: []
	W0420 01:29:24.122688  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:24.122698  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:24.122717  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:24.138022  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:24.138061  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:24.220977  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:24.220998  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:24.221012  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.302928  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:24.302972  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:24.351237  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:24.351277  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:26.910354  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:26.926815  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:26.926900  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:26.966123  142411 cri.go:89] found id: ""
	I0420 01:29:26.966155  142411 logs.go:276] 0 containers: []
	W0420 01:29:26.966165  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:26.966172  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:26.966246  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:27.011679  142411 cri.go:89] found id: ""
	I0420 01:29:27.011714  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.011727  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:27.011735  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:27.011806  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:27.052116  142411 cri.go:89] found id: ""
	I0420 01:29:27.052141  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.052148  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:27.052155  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:27.052202  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:27.090375  142411 cri.go:89] found id: ""
	I0420 01:29:27.090404  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.090413  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:27.090419  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:27.090476  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:27.131911  142411 cri.go:89] found id: ""
	I0420 01:29:27.131946  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.131957  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:27.131965  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:27.132033  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:27.176663  142411 cri.go:89] found id: ""
	I0420 01:29:27.176696  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.176714  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:27.176723  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:27.176788  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:27.217806  142411 cri.go:89] found id: ""
	I0420 01:29:27.217836  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.217846  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:27.217853  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:27.217917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:27.253956  142411 cri.go:89] found id: ""
	I0420 01:29:27.253981  142411 logs.go:276] 0 containers: []
	W0420 01:29:27.253989  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:27.253998  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:27.254014  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:27.298225  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:27.298264  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:27.351213  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:27.351259  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:27.366352  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:27.366388  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:27.466716  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:27.466742  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:27.466770  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:24.184743  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:26.681862  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:28.683193  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.042377  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.539413  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:27.395660  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:29.893947  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:30.050528  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:30.065697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:30.065769  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:30.104643  142411 cri.go:89] found id: ""
	I0420 01:29:30.104675  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.104686  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:30.104694  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:30.104753  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:30.143864  142411 cri.go:89] found id: ""
	I0420 01:29:30.143892  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.143903  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:30.143910  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:30.143976  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:30.187925  142411 cri.go:89] found id: ""
	I0420 01:29:30.187954  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.187964  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:30.187972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:30.188035  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:30.227968  142411 cri.go:89] found id: ""
	I0420 01:29:30.227995  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.228003  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:30.228009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:30.228059  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.269550  142411 cri.go:89] found id: ""
	I0420 01:29:30.269584  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.269596  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:30.269604  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:30.269672  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:30.311777  142411 cri.go:89] found id: ""
	I0420 01:29:30.311810  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.311819  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:30.311827  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:30.311878  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:30.353569  142411 cri.go:89] found id: ""
	I0420 01:29:30.353601  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.353610  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:30.353617  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:30.353683  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:30.395003  142411 cri.go:89] found id: ""
	I0420 01:29:30.395032  142411 logs.go:276] 0 containers: []
	W0420 01:29:30.395043  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:30.395054  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:30.395066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:30.455495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:30.455536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:30.473749  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:30.473778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:30.555370  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:30.555397  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:30.555417  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:30.637079  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:30.637124  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:33.188917  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:33.203689  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:33.203757  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:33.246796  142411 cri.go:89] found id: ""
	I0420 01:29:33.246828  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.246840  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:33.246848  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:33.246911  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:33.284667  142411 cri.go:89] found id: ""
	I0420 01:29:33.284700  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.284712  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:33.284720  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:33.284782  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:33.328653  142411 cri.go:89] found id: ""
	I0420 01:29:33.328688  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.328701  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:33.328709  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:33.328777  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:33.369081  142411 cri.go:89] found id: ""
	I0420 01:29:33.369107  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.369121  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:33.369130  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:33.369180  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:30.684861  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:32.689885  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.547492  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.040445  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:31.894902  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:34.392071  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:33.414282  142411 cri.go:89] found id: ""
	I0420 01:29:33.414313  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.414322  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:33.414327  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:33.414411  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:33.457086  142411 cri.go:89] found id: ""
	I0420 01:29:33.457112  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.457119  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:33.457126  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:33.457176  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:33.498686  142411 cri.go:89] found id: ""
	I0420 01:29:33.498716  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.498729  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:33.498738  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:33.498808  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:33.538872  142411 cri.go:89] found id: ""
	I0420 01:29:33.538907  142411 logs.go:276] 0 containers: []
	W0420 01:29:33.538920  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:33.538932  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:33.538959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:33.592586  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:33.592631  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:33.609200  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:33.609226  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:33.690795  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:33.690820  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:33.690836  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:33.776092  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:33.776131  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:36.331256  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:36.348813  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:36.348892  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:36.397503  142411 cri.go:89] found id: ""
	I0420 01:29:36.397527  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.397534  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:36.397540  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:36.397603  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:36.439638  142411 cri.go:89] found id: ""
	I0420 01:29:36.439667  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.439675  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:36.439685  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:36.439761  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:36.477155  142411 cri.go:89] found id: ""
	I0420 01:29:36.477182  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.477194  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:36.477201  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:36.477259  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:36.533326  142411 cri.go:89] found id: ""
	I0420 01:29:36.533360  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.533373  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:36.533381  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:36.533446  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:36.573056  142411 cri.go:89] found id: ""
	I0420 01:29:36.573093  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.573107  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:36.573114  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:36.573177  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:36.611901  142411 cri.go:89] found id: ""
	I0420 01:29:36.611937  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.611949  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:36.611957  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:36.612017  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:36.656780  142411 cri.go:89] found id: ""
	I0420 01:29:36.656810  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.656823  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:36.656830  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:36.656899  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:36.699872  142411 cri.go:89] found id: ""
	I0420 01:29:36.699906  142411 logs.go:276] 0 containers: []
	W0420 01:29:36.699916  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:36.699928  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:36.699943  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:36.758859  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:36.758895  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:36.775108  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:36.775145  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:36.858001  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:36.858027  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:36.858044  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:36.936114  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:36.936154  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:35.182481  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:37.182529  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.041125  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.043465  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.540023  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:36.395316  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:38.894062  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:40.894416  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:39.487167  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:39.502929  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:39.502995  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:39.547338  142411 cri.go:89] found id: ""
	I0420 01:29:39.547363  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.547371  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:39.547377  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:39.547430  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:39.608684  142411 cri.go:89] found id: ""
	I0420 01:29:39.608714  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.608722  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:39.608728  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:39.608793  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:39.679248  142411 cri.go:89] found id: ""
	I0420 01:29:39.679281  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.679292  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:39.679300  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:39.679361  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:39.725226  142411 cri.go:89] found id: ""
	I0420 01:29:39.725257  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.725270  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:39.725278  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:39.725363  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:39.767653  142411 cri.go:89] found id: ""
	I0420 01:29:39.767681  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.767690  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:39.767697  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:39.767760  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:39.807848  142411 cri.go:89] found id: ""
	I0420 01:29:39.807885  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.807893  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:39.807900  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:39.807968  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:39.847171  142411 cri.go:89] found id: ""
	I0420 01:29:39.847201  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.847212  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:39.847219  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:39.847284  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:39.884959  142411 cri.go:89] found id: ""
	I0420 01:29:39.884996  142411 logs.go:276] 0 containers: []
	W0420 01:29:39.885007  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:39.885034  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:39.885050  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:39.959245  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:39.959269  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:39.959286  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:40.041394  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:40.041436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:40.083125  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:40.083171  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:40.139902  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:40.139957  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:42.657038  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:42.673303  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:42.673407  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:42.717081  142411 cri.go:89] found id: ""
	I0420 01:29:42.717106  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.717114  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:42.717120  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:42.717170  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:42.762322  142411 cri.go:89] found id: ""
	I0420 01:29:42.762357  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.762367  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:42.762375  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:42.762442  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:42.805059  142411 cri.go:89] found id: ""
	I0420 01:29:42.805112  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.805122  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:42.805131  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:42.805201  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:42.848539  142411 cri.go:89] found id: ""
	I0420 01:29:42.848568  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.848580  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:42.848587  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:42.848679  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:42.887915  142411 cri.go:89] found id: ""
	I0420 01:29:42.887949  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.887960  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:42.887967  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:42.888032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:42.938832  142411 cri.go:89] found id: ""
	I0420 01:29:42.938867  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.938878  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:42.938888  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:42.938957  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:42.982376  142411 cri.go:89] found id: ""
	I0420 01:29:42.982402  142411 logs.go:276] 0 containers: []
	W0420 01:29:42.982409  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:42.982415  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:42.982477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:43.023264  142411 cri.go:89] found id: ""
	I0420 01:29:43.023293  142411 logs.go:276] 0 containers: []
	W0420 01:29:43.023301  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:43.023313  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:43.023326  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:43.079673  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:43.079714  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:43.094753  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:43.094786  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:43.180113  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:43.180149  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:43.180177  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:43.259830  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:43.259872  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:39.182568  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:41.186805  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.683131  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:42.540687  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.039857  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:43.392948  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.394081  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:45.802515  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:45.816908  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:45.816965  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:45.861091  142411 cri.go:89] found id: ""
	I0420 01:29:45.861123  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.861132  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:45.861138  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:45.861224  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:45.901677  142411 cri.go:89] found id: ""
	I0420 01:29:45.901702  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.901710  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:45.901716  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:45.901767  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:45.938301  142411 cri.go:89] found id: ""
	I0420 01:29:45.938325  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.938334  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:45.938339  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:45.938393  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:45.978432  142411 cri.go:89] found id: ""
	I0420 01:29:45.978460  142411 logs.go:276] 0 containers: []
	W0420 01:29:45.978473  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:45.978479  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:45.978537  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:46.019410  142411 cri.go:89] found id: ""
	I0420 01:29:46.019446  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.019455  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:46.019461  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:46.019524  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:46.071002  142411 cri.go:89] found id: ""
	I0420 01:29:46.071032  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.071041  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:46.071052  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:46.071124  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:46.110362  142411 cri.go:89] found id: ""
	I0420 01:29:46.110391  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.110402  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:46.110409  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:46.110477  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:46.152276  142411 cri.go:89] found id: ""
	I0420 01:29:46.152311  142411 logs.go:276] 0 containers: []
	W0420 01:29:46.152322  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:46.152334  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:46.152351  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:46.205121  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:46.205159  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:46.221808  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:46.221842  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:46.300394  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:46.300418  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:46.300434  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:46.391961  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:46.392002  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:45.684038  141927 pod_ready.go:102] pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.176081  141927 pod_ready.go:81] duration metric: took 4m0.00056563s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" ...
	E0420 01:29:48.176112  141927 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rqqlt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:29:48.176130  141927 pod_ready.go:38] duration metric: took 4m7.024291569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:29:48.176166  141927 kubeadm.go:591] duration metric: took 4m16.819079549s to restartPrimaryControlPlane
	W0420 01:29:48.176256  141927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:29:48.176291  141927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:29:47.040255  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.043956  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:47.893875  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:49.894291  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:48.945086  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:48.961414  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:48.961491  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:49.010230  142411 cri.go:89] found id: ""
	I0420 01:29:49.010285  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.010299  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:49.010309  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:49.010385  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:49.054455  142411 cri.go:89] found id: ""
	I0420 01:29:49.054481  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.054491  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:49.054499  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:49.054566  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:49.094536  142411 cri.go:89] found id: ""
	I0420 01:29:49.094562  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.094572  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:49.094580  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:49.094740  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:49.134004  142411 cri.go:89] found id: ""
	I0420 01:29:49.134035  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.134046  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:49.134054  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:49.134118  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:49.173697  142411 cri.go:89] found id: ""
	I0420 01:29:49.173728  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.173741  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:49.173750  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:49.173817  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:49.215655  142411 cri.go:89] found id: ""
	I0420 01:29:49.215681  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.215689  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:49.215695  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:49.215745  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:49.258282  142411 cri.go:89] found id: ""
	I0420 01:29:49.258312  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.258324  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:49.258332  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:49.258394  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:49.298565  142411 cri.go:89] found id: ""
	I0420 01:29:49.298597  142411 logs.go:276] 0 containers: []
	W0420 01:29:49.298608  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:49.298620  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:49.298638  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:49.378833  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:49.378862  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:49.378880  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:49.467477  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:49.467517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:49.521747  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:49.521788  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:49.583386  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:49.583436  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.102969  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:52.122971  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:52.123053  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:52.166166  142411 cri.go:89] found id: ""
	I0420 01:29:52.166199  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.166210  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:52.166219  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:52.166287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:52.206790  142411 cri.go:89] found id: ""
	I0420 01:29:52.206817  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.206824  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:52.206830  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:52.206889  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:52.249879  142411 cri.go:89] found id: ""
	I0420 01:29:52.249911  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.249921  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:52.249931  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:52.249997  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:52.293953  142411 cri.go:89] found id: ""
	I0420 01:29:52.293997  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.294009  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:52.294018  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:52.294095  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:52.339447  142411 cri.go:89] found id: ""
	I0420 01:29:52.339478  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.339490  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:52.339497  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:52.339558  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:52.378383  142411 cri.go:89] found id: ""
	I0420 01:29:52.378416  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.378428  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:52.378435  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:52.378488  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:52.423079  142411 cri.go:89] found id: ""
	I0420 01:29:52.423121  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.423130  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:52.423137  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:52.423205  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:52.459525  142411 cri.go:89] found id: ""
	I0420 01:29:52.459559  142411 logs.go:276] 0 containers: []
	W0420 01:29:52.459572  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:52.459594  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:52.459610  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:52.567141  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:52.567186  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:52.618194  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:52.618235  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:52.681921  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:52.681959  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:52.699065  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:52.699108  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:52.776829  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:51.540922  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.043224  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:52.397218  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:54.895147  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:55.277933  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:55.293380  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:55.293455  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:55.337443  142411 cri.go:89] found id: ""
	I0420 01:29:55.337475  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.337483  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:55.337491  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:55.337557  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:55.375911  142411 cri.go:89] found id: ""
	I0420 01:29:55.375942  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.375951  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:55.375957  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:55.376022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:55.418545  142411 cri.go:89] found id: ""
	I0420 01:29:55.418569  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.418577  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:55.418583  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:55.418635  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:55.459343  142411 cri.go:89] found id: ""
	I0420 01:29:55.459378  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.459390  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:55.459397  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:55.459452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:55.503851  142411 cri.go:89] found id: ""
	I0420 01:29:55.503878  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.503887  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:55.503895  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:55.503959  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:55.542533  142411 cri.go:89] found id: ""
	I0420 01:29:55.542556  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.542562  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:55.542568  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:55.542623  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:55.582205  142411 cri.go:89] found id: ""
	I0420 01:29:55.582236  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.582246  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:55.582252  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:55.582314  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:55.624727  142411 cri.go:89] found id: ""
	I0420 01:29:55.624757  142411 logs.go:276] 0 containers: []
	W0420 01:29:55.624769  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:55.624781  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:55.624803  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:55.675403  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:55.675438  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:55.691492  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:55.691516  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:55.772283  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:55.772313  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:55.772330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:55.859440  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:55.859477  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:29:56.543221  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.041874  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:57.393723  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:59.894390  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:29:58.406009  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:29:58.422305  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:29:58.422382  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:29:58.468206  142411 cri.go:89] found id: ""
	I0420 01:29:58.468303  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.468321  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:29:58.468329  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:29:58.468402  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:29:58.513981  142411 cri.go:89] found id: ""
	I0420 01:29:58.514018  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.514027  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:29:58.514041  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:29:58.514105  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:29:58.559967  142411 cri.go:89] found id: ""
	I0420 01:29:58.560000  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.560011  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:29:58.560019  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:29:58.560084  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:29:58.600710  142411 cri.go:89] found id: ""
	I0420 01:29:58.600744  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.600763  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:29:58.600771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:29:58.600834  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:29:58.645995  142411 cri.go:89] found id: ""
	I0420 01:29:58.646022  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.646030  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:29:58.646036  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:29:58.646097  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:29:58.684930  142411 cri.go:89] found id: ""
	I0420 01:29:58.684957  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.684965  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:29:58.684972  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:29:58.685022  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:29:58.727225  142411 cri.go:89] found id: ""
	I0420 01:29:58.727251  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.727259  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:29:58.727265  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:29:58.727319  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:29:58.765244  142411 cri.go:89] found id: ""
	I0420 01:29:58.765282  142411 logs.go:276] 0 containers: []
	W0420 01:29:58.765293  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:29:58.765303  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:29:58.765330  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:29:58.817791  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:29:58.817822  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:29:58.832882  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:29:58.832926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:29:58.919297  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:29:58.919325  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:29:58.919342  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:29:59.002590  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:29:59.002637  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.551854  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:01.568974  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:01.569054  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:01.609165  142411 cri.go:89] found id: ""
	I0420 01:30:01.609191  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.609200  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:01.609206  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:01.609272  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:01.653349  142411 cri.go:89] found id: ""
	I0420 01:30:01.653383  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.653396  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:01.653405  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:01.653482  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:01.698961  142411 cri.go:89] found id: ""
	I0420 01:30:01.698991  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.699002  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:01.699009  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:01.699063  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:01.739230  142411 cri.go:89] found id: ""
	I0420 01:30:01.739271  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.739283  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:01.739292  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:01.739376  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:01.781839  142411 cri.go:89] found id: ""
	I0420 01:30:01.781873  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.781885  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:01.781893  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:01.781960  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:01.821212  142411 cri.go:89] found id: ""
	I0420 01:30:01.821241  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.821252  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:01.821259  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:01.821339  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:01.859959  142411 cri.go:89] found id: ""
	I0420 01:30:01.859984  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.859993  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:01.859999  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:01.860060  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:01.898832  142411 cri.go:89] found id: ""
	I0420 01:30:01.898858  142411 logs.go:276] 0 containers: []
	W0420 01:30:01.898865  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:01.898875  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:01.898886  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:01.943065  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:01.943156  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:01.995618  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:01.995654  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:02.010489  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:02.010517  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:02.090181  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:02.090222  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:02.090238  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:01.541135  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.041977  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:02.394456  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.894450  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:04.671376  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:04.687535  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:04.687629  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:04.728732  142411 cri.go:89] found id: ""
	I0420 01:30:04.728765  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.728778  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:04.728786  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:04.728854  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:04.768537  142411 cri.go:89] found id: ""
	I0420 01:30:04.768583  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.768602  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:04.768610  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:04.768676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:04.811714  142411 cri.go:89] found id: ""
	I0420 01:30:04.811741  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.811750  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:04.811756  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:04.811816  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:04.852324  142411 cri.go:89] found id: ""
	I0420 01:30:04.852360  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.852371  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:04.852379  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:04.852452  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:04.891657  142411 cri.go:89] found id: ""
	I0420 01:30:04.891688  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.891700  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:04.891708  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:04.891774  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:04.933192  142411 cri.go:89] found id: ""
	I0420 01:30:04.933222  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.933230  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:04.933236  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:04.933291  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:04.972796  142411 cri.go:89] found id: ""
	I0420 01:30:04.972819  142411 logs.go:276] 0 containers: []
	W0420 01:30:04.972828  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:04.972834  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:04.972888  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:05.014782  142411 cri.go:89] found id: ""
	I0420 01:30:05.014821  142411 logs.go:276] 0 containers: []
	W0420 01:30:05.014833  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:05.014846  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:05.014862  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:05.067438  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:05.067470  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:05.121336  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:05.121371  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:05.137495  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:05.137529  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:05.214132  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:05.214153  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:05.214170  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:07.796964  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:07.810856  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:07.810917  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:07.846993  142411 cri.go:89] found id: ""
	I0420 01:30:07.847024  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.847033  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:07.847040  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:07.847089  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:07.886422  142411 cri.go:89] found id: ""
	I0420 01:30:07.886452  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.886464  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:07.886474  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:07.886567  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:07.942200  142411 cri.go:89] found id: ""
	I0420 01:30:07.942230  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.942238  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:07.942245  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:07.942296  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:07.980179  142411 cri.go:89] found id: ""
	I0420 01:30:07.980215  142411 logs.go:276] 0 containers: []
	W0420 01:30:07.980226  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:07.980235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:07.980299  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:08.020097  142411 cri.go:89] found id: ""
	I0420 01:30:08.020130  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.020140  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:08.020145  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:08.020215  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:08.063793  142411 cri.go:89] found id: ""
	I0420 01:30:08.063837  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.063848  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:08.063857  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:08.063930  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:08.108674  142411 cri.go:89] found id: ""
	I0420 01:30:08.108705  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.108716  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:08.108724  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:08.108798  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:08.147467  142411 cri.go:89] found id: ""
	I0420 01:30:08.147495  142411 logs.go:276] 0 containers: []
	W0420 01:30:08.147503  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:08.147512  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:08.147525  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:08.239416  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:08.239466  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:08.294639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:08.294669  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:08.349753  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:08.349795  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:08.368971  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:08.369003  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:30:06.540958  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:08.541701  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:06.898857  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:09.397590  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	W0420 01:30:08.449996  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:10.950318  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:10.964969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:10.965032  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:11.006321  142411 cri.go:89] found id: ""
	I0420 01:30:11.006354  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.006365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:11.006375  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:11.006437  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:11.047982  142411 cri.go:89] found id: ""
	I0420 01:30:11.048010  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.048019  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:11.048025  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:11.048073  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:11.089185  142411 cri.go:89] found id: ""
	I0420 01:30:11.089217  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.089226  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:11.089232  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:11.089287  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:11.131293  142411 cri.go:89] found id: ""
	I0420 01:30:11.131322  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.131335  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:11.131344  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:11.131398  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:11.170394  142411 cri.go:89] found id: ""
	I0420 01:30:11.170419  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.170427  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:11.170432  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:11.170485  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:11.210580  142411 cri.go:89] found id: ""
	I0420 01:30:11.210619  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.210631  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:11.210640  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:11.210706  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:11.251938  142411 cri.go:89] found id: ""
	I0420 01:30:11.251977  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.251990  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:11.251998  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:11.252064  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:11.295999  142411 cri.go:89] found id: ""
	I0420 01:30:11.296033  142411 logs.go:276] 0 containers: []
	W0420 01:30:11.296045  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:11.296057  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:11.296072  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:11.378564  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:11.378632  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:11.422836  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:11.422868  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:11.475893  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:11.475928  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:11.491524  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:11.491555  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:11.569066  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:11.041078  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:13.540339  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:15.541762  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:11.893724  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.394206  142057 pod_ready.go:102] pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:14.886464  142057 pod_ready.go:81] duration metric: took 4m0.00077804s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" ...
	E0420 01:30:14.886500  142057 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8s79l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:30:14.886528  142057 pod_ready.go:38] duration metric: took 4m14.554070758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:14.886572  142057 kubeadm.go:591] duration metric: took 4m22.173690393s to restartPrimaryControlPlane
	W0420 01:30:14.886657  142057 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:14.886691  142057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:14.070158  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:14.086000  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:14.086067  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:14.128864  142411 cri.go:89] found id: ""
	I0420 01:30:14.128894  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.128906  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:14.128914  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:14.128986  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:14.169447  142411 cri.go:89] found id: ""
	I0420 01:30:14.169482  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.169497  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:14.169506  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:14.169583  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:14.210007  142411 cri.go:89] found id: ""
	I0420 01:30:14.210043  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.210054  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:14.210062  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:14.210119  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:14.247652  142411 cri.go:89] found id: ""
	I0420 01:30:14.247685  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.247695  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:14.247703  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:14.247764  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:14.290788  142411 cri.go:89] found id: ""
	I0420 01:30:14.290820  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.290830  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:14.290847  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:14.290908  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:14.351514  142411 cri.go:89] found id: ""
	I0420 01:30:14.351548  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.351570  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:14.351581  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:14.351637  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:14.423481  142411 cri.go:89] found id: ""
	I0420 01:30:14.423520  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.423534  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:14.423543  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:14.423615  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:14.465597  142411 cri.go:89] found id: ""
	I0420 01:30:14.465622  142411 logs.go:276] 0 containers: []
	W0420 01:30:14.465630  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:14.465639  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:14.465655  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:14.522669  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:14.522705  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:14.541258  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:14.541293  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:14.618657  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:14.618678  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:14.618691  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:14.702616  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:14.702658  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:17.256212  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:17.277171  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:17.277250  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:17.321548  142411 cri.go:89] found id: ""
	I0420 01:30:17.321582  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.321600  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:17.321607  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:17.321676  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:17.362856  142411 cri.go:89] found id: ""
	I0420 01:30:17.362883  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.362890  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:17.362896  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:17.362966  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:17.409494  142411 cri.go:89] found id: ""
	I0420 01:30:17.409525  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.409539  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:17.409548  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:17.409631  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:17.447759  142411 cri.go:89] found id: ""
	I0420 01:30:17.447801  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.447812  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:17.447819  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:17.447885  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:17.498416  142411 cri.go:89] found id: ""
	I0420 01:30:17.498444  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.498454  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:17.498460  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:17.498528  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:17.546025  142411 cri.go:89] found id: ""
	I0420 01:30:17.546055  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.546064  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:17.546072  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:17.546138  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:17.585797  142411 cri.go:89] found id: ""
	I0420 01:30:17.585829  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.585840  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:17.585848  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:17.585919  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:17.630850  142411 cri.go:89] found id: ""
	I0420 01:30:17.630886  142411 logs.go:276] 0 containers: []
	W0420 01:30:17.630899  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:17.630911  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:17.630926  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:17.689472  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:17.689510  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:17.705603  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:17.705642  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:17.794094  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:17.794137  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:17.794155  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:17.879397  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:17.879435  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:18.041437  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.044174  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:20.428142  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:20.444936  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:30:20.445018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:30:20.487317  142411 cri.go:89] found id: ""
	I0420 01:30:20.487354  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.487365  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:30:20.487373  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:30:20.487443  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:30:20.537209  142411 cri.go:89] found id: ""
	I0420 01:30:20.537241  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.537254  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:30:20.537262  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:30:20.537348  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:30:20.584311  142411 cri.go:89] found id: ""
	I0420 01:30:20.584343  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.584352  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:30:20.584357  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:30:20.584413  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:30:20.631915  142411 cri.go:89] found id: ""
	I0420 01:30:20.631948  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.631959  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:30:20.631969  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:30:20.632040  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:30:20.679680  142411 cri.go:89] found id: ""
	I0420 01:30:20.679707  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.679716  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:30:20.679721  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:30:20.679770  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:30:20.724967  142411 cri.go:89] found id: ""
	I0420 01:30:20.725002  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.725013  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:30:20.725027  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:30:20.725091  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:30:20.772717  142411 cri.go:89] found id: ""
	I0420 01:30:20.772751  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.772762  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:30:20.772771  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:30:20.772837  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:30:20.812421  142411 cri.go:89] found id: ""
	I0420 01:30:20.812449  142411 logs.go:276] 0 containers: []
	W0420 01:30:20.812460  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:30:20.812471  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:30:20.812485  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:30:20.870522  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:30:20.870554  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:30:20.886764  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:30:20.886793  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:30:20.963941  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:30:20.963964  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:30:20.963979  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:30:21.045738  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:30:21.045778  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:30:20.850989  141927 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.674674204s)
	I0420 01:30:20.851082  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:20.868537  141927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:20.880284  141927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:20.891650  141927 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:20.891672  141927 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:20.891726  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0420 01:30:20.902443  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:20.902509  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:20.913476  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0420 01:30:20.923762  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:20.923836  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:20.934281  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.944194  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:20.944254  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:20.955506  141927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0420 01:30:20.968039  141927 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:20.968107  141927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:20.978918  141927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:21.214688  141927 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:22.539778  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:24.543547  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:23.600037  142411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:23.616539  142411 kubeadm.go:591] duration metric: took 4m4.142686832s to restartPrimaryControlPlane
	W0420 01:30:23.616641  142411 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:30:23.616676  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:30:25.481285  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.864573977s)
	I0420 01:30:25.481385  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:25.500950  142411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:25.518624  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:25.532506  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:25.532531  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:25.532584  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:25.546634  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:25.546708  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:25.561379  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:25.575506  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:25.575627  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:25.590615  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.604855  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:25.604923  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:25.619717  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:25.634525  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:25.634607  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:25.649408  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:25.735636  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:30:25.735697  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:25.913199  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:25.913347  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:25.913483  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:26.120240  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:26.122066  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:26.122169  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:26.122279  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:26.122395  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:26.122499  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:26.122623  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:26.122715  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:26.122806  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:26.122898  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:26.122999  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:26.123113  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:26.123173  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:26.123244  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:26.243908  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:26.354349  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:26.605778  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:26.833914  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:26.855348  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:26.857029  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:26.857250  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:27.010707  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:27.012314  142411 out.go:204]   - Booting up control plane ...
	I0420 01:30:27.012456  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:27.036284  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:27.049123  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:27.050561  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:27.053222  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:30:30.213456  141927 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:30.213557  141927 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:30.213687  141927 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:30.213826  141927 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:30.213915  141927 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:30.213978  141927 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:30.215501  141927 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:30.215594  141927 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:30.215667  141927 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:30.215802  141927 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:30.215886  141927 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:30.215960  141927 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:30.216018  141927 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:30.216097  141927 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:30.216156  141927 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:30.216258  141927 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:30.216350  141927 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:30.216385  141927 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:30.216447  141927 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:30.216517  141927 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:30.216589  141927 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:30.216653  141927 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:30.216743  141927 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:30.216832  141927 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:30.216933  141927 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:30.217019  141927 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:30.218228  141927 out.go:204]   - Booting up control plane ...
	I0420 01:30:30.218341  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:30.218446  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:30.218516  141927 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:30.218615  141927 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:30.218703  141927 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:30.218753  141927 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:30.218904  141927 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:30.218975  141927 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:30.219027  141927 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001925972s
	I0420 01:30:30.219128  141927 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:30.219216  141927 kubeadm.go:309] [api-check] The API server is healthy after 5.502367015s
	I0420 01:30:30.219336  141927 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:30.219504  141927 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:30.219576  141927 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:30.219816  141927 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-907988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:30.219880  141927 kubeadm.go:309] [bootstrap-token] Using token: ozlrl4.y5r3psi4bnl35gso
	I0420 01:30:30.221283  141927 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:30.221416  141927 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:30.221533  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:30.221728  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:30.221968  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:30.222146  141927 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:30.222255  141927 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:30.222385  141927 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:30.222455  141927 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:30.222524  141927 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:30.222534  141927 kubeadm.go:309] 
	I0420 01:30:30.222614  141927 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:30.222628  141927 kubeadm.go:309] 
	I0420 01:30:30.222692  141927 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:30.222699  141927 kubeadm.go:309] 
	I0420 01:30:30.222723  141927 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:30.222772  141927 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:30.222815  141927 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:30.222821  141927 kubeadm.go:309] 
	I0420 01:30:30.222878  141927 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:30.222885  141927 kubeadm.go:309] 
	I0420 01:30:30.222923  141927 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:30.222929  141927 kubeadm.go:309] 
	I0420 01:30:30.222994  141927 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:30.223100  141927 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:30.223171  141927 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:30.223189  141927 kubeadm.go:309] 
	I0420 01:30:30.223281  141927 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:30.223346  141927 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:30.223354  141927 kubeadm.go:309] 
	I0420 01:30:30.223423  141927 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223527  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:30.223552  141927 kubeadm.go:309] 	--control-plane 
	I0420 01:30:30.223559  141927 kubeadm.go:309] 
	I0420 01:30:30.223627  141927 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:30.223635  141927 kubeadm.go:309] 
	I0420 01:30:30.223704  141927 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ozlrl4.y5r3psi4bnl35gso \
	I0420 01:30:30.223811  141927 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:30.223826  141927 cni.go:84] Creating CNI manager for ""
	I0420 01:30:30.223833  141927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:30.225184  141927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:27.041383  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:29.540967  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:30.226237  141927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:30.241388  141927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:30:30.274356  141927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:30.274469  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:30.274503  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-907988 minikube.k8s.io/updated_at=2024_04_20T01_30_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=default-k8s-diff-port-907988 minikube.k8s.io/primary=true
	I0420 01:30:30.319402  141927 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:30.505362  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.006101  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:31.505679  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.005947  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.505747  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.005919  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:33.505449  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:34.006029  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:32.040710  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.541175  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:34.505846  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.006187  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:35.505618  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.005994  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:36.506217  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.006428  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.506359  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.006018  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:38.505454  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:39.006426  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:37.041157  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.542266  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:39.506227  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.005941  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:40.506123  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.006198  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:41.506244  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.006045  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:42.505458  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.006082  141927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:43.122481  141927 kubeadm.go:1107] duration metric: took 12.84807935s to wait for elevateKubeSystemPrivileges
	W0420 01:30:43.122525  141927 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:30:43.122535  141927 kubeadm.go:393] duration metric: took 5m11.83456536s to StartCluster
	I0420 01:30:43.122559  141927 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.122689  141927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:30:43.124746  141927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:30:43.125059  141927 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:30:43.126572  141927 out.go:177] * Verifying Kubernetes components...
	I0420 01:30:43.125129  141927 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:30:43.125301  141927 config.go:182] Loaded profile config "default-k8s-diff-port-907988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:30:43.128187  141927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:30:43.128231  141927 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128240  141927 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-907988"
	I0420 01:30:43.128277  141927 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-907988"
	I0420 01:30:43.128278  141927 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-907988"
	W0420 01:30:43.128288  141927 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:30:43.128302  141927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-907988"
	I0420 01:30:43.128352  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.128769  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128795  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.128840  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128800  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.128306  141927 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.128994  141927 addons.go:243] addon metrics-server should already be in state true
	I0420 01:30:43.129026  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.129378  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.129401  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.148251  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0420 01:30:43.148272  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0420 01:30:43.148503  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0420 01:30:43.148959  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.148985  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149060  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.149605  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149626  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149683  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149688  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.149698  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.149706  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.150105  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150108  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150106  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.150358  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.150703  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150733  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.150760  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.150798  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.154242  141927 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-907988"
	W0420 01:30:43.154266  141927 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:30:43.154300  141927 host.go:66] Checking if "default-k8s-diff-port-907988" exists ...
	I0420 01:30:43.154673  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.154715  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.167283  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0420 01:30:43.167925  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.168475  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.168496  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.168868  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.169094  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.171067  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0420 01:30:43.171384  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.173102  141927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:30:43.171760  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.172823  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0420 01:30:43.174639  141927 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.174661  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:30:43.174681  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.174859  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.175307  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175331  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175460  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.175476  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.175799  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.175992  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.176361  141927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:30:43.176376  141927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:30:43.176686  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.178744  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.178848  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.180048  141927 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:30:43.179462  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.181257  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:30:43.181275  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:30:43.181289  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.181296  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.179641  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.182168  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.182437  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.182627  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.184562  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.184958  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.184985  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.185241  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.185430  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.185621  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.185771  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.195778  141927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0420 01:30:43.196419  141927 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:30:43.196979  141927 main.go:141] libmachine: Using API Version  1
	I0420 01:30:43.197002  141927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:30:43.197763  141927 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:30:43.198072  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetState
	I0420 01:30:43.200177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .DriverName
	I0420 01:30:43.200480  141927 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.200497  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:30:43.200516  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHHostname
	I0420 01:30:43.204078  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHPort
	I0420 01:30:43.204128  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204154  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:22:6d", ip: ""} in network mk-default-k8s-diff-port-907988: {Iface:virbr1 ExpiryTime:2024-04-20 02:25:15 +0000 UTC Type:0 Mac:52:54:00:c7:22:6d Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-907988 Clientid:01:52:54:00:c7:22:6d}
	I0420 01:30:43.204178  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | domain default-k8s-diff-port-907988 has defined IP address 192.168.39.222 and MAC address 52:54:00:c7:22:6d in network mk-default-k8s-diff-port-907988
	I0420 01:30:43.204275  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHKeyPath
	I0420 01:30:43.204456  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .GetSSHUsername
	I0420 01:30:43.204582  141927 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/default-k8s-diff-port-907988/id_rsa Username:docker}
	I0420 01:30:43.375731  141927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:30:43.424911  141927 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436729  141927 node_ready.go:49] node "default-k8s-diff-port-907988" has status "Ready":"True"
	I0420 01:30:43.436750  141927 node_ready.go:38] duration metric: took 11.810027ms for node "default-k8s-diff-port-907988" to be "Ready" ...
	I0420 01:30:43.436759  141927 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:43.445452  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:43.497224  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:30:43.526236  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:30:43.527573  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:30:43.527597  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:30:43.591844  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:30:43.591872  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:30:43.655692  141927 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:43.655721  141927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:30:43.824523  141927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:30:44.808651  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.311370016s)
	I0420 01:30:44.808721  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808724  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.282444767s)
	I0420 01:30:44.808735  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.808767  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.808783  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809052  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809066  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809074  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809081  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809144  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809162  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809170  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.809179  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.809626  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809635  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.809647  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809655  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:44.809626  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:44.833935  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:44.833963  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:44.834326  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:44.834348  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316084  141927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.491512905s)
	I0420 01:30:45.316157  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316177  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316514  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316539  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.316593  141927 main.go:141] libmachine: Making call to close driver server
	I0420 01:30:45.316610  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) Calling .Close
	I0420 01:30:45.316910  141927 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:30:45.316989  141927 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:30:45.317007  141927 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-907988"
	I0420 01:30:45.316906  141927 main.go:141] libmachine: (default-k8s-diff-port-907988) DBG | Closing plugin on server side
	I0420 01:30:45.319289  141927 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:30:42.040865  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:44.042663  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.320468  141927 addons.go:505] duration metric: took 2.195343987s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:30:45.453717  141927 pod_ready.go:102] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:45.952010  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.952032  141927 pod_ready.go:81] duration metric: took 2.506556645s for pod "coredns-7db6d8ff4d-g2nzn" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.952040  141927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957512  141927 pod_ready.go:92] pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.957533  141927 pod_ready.go:81] duration metric: took 5.486362ms for pod "coredns-7db6d8ff4d-p8dhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.957541  141927 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962790  141927 pod_ready.go:92] pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.962810  141927 pod_ready.go:81] duration metric: took 5.261485ms for pod "etcd-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.962821  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968720  141927 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.968743  141927 pod_ready.go:81] duration metric: took 5.914425ms for pod "kube-apiserver-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.968754  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976930  141927 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:45.976946  141927 pod_ready.go:81] duration metric: took 8.183898ms for pod "kube-controller-manager-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:45.976954  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350179  141927 pod_ready.go:92] pod "kube-proxy-jt8wr" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.350203  141927 pod_ready.go:81] duration metric: took 373.241134ms for pod "kube-proxy-jt8wr" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.350212  141927 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749542  141927 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace has status "Ready":"True"
	I0420 01:30:46.749566  141927 pod_ready.go:81] duration metric: took 399.34726ms for pod "kube-scheduler-default-k8s-diff-port-907988" in "kube-system" namespace to be "Ready" ...
	I0420 01:30:46.749573  141927 pod_ready.go:38] duration metric: took 3.312805349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:30:46.749587  141927 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:30:46.749647  141927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:30:46.785318  141927 api_server.go:72] duration metric: took 3.660207577s to wait for apiserver process to appear ...
	I0420 01:30:46.785349  141927 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:30:46.785373  141927 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0420 01:30:46.793933  141927 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0420 01:30:46.794890  141927 api_server.go:141] control plane version: v1.30.0
	I0420 01:30:46.794911  141927 api_server.go:131] duration metric: took 9.555146ms to wait for apiserver health ...
	I0420 01:30:46.794920  141927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:30:46.953036  141927 system_pods.go:59] 9 kube-system pods found
	I0420 01:30:46.953066  141927 system_pods.go:61] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:46.953070  141927 system_pods.go:61] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:46.953074  141927 system_pods.go:61] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:46.953077  141927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:46.953081  141927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:46.953085  141927 system_pods.go:61] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:46.953088  141927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:46.953094  141927 system_pods.go:61] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:46.953098  141927 system_pods.go:61] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:46.953108  141927 system_pods.go:74] duration metric: took 158.182751ms to wait for pod list to return data ...
	I0420 01:30:46.953116  141927 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:30:47.151205  141927 default_sa.go:45] found service account: "default"
	I0420 01:30:47.151245  141927 default_sa.go:55] duration metric: took 198.121475ms for default service account to be created ...
	I0420 01:30:47.151274  141927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:30:47.354321  141927 system_pods.go:86] 9 kube-system pods found
	I0420 01:30:47.354348  141927 system_pods.go:89] "coredns-7db6d8ff4d-g2nzn" [d07ba546-0251-4862-ad1b-0c3d5ee7b1f3] Running
	I0420 01:30:47.354353  141927 system_pods.go:89] "coredns-7db6d8ff4d-p8dhp" [4bf589b6-f54b-4615-b95e-b95c89766e24] Running
	I0420 01:30:47.354358  141927 system_pods.go:89] "etcd-default-k8s-diff-port-907988" [f2711b7c-9d31-4586-bcf0-345ef2c9e62a] Running
	I0420 01:30:47.354364  141927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-907988" [7a4fccc8-90d5-4467-8925-df5d8e1e128a] Running
	I0420 01:30:47.354369  141927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-907988" [68350b12-3244-4565-ab06-6d7ad5876935] Running
	I0420 01:30:47.354373  141927 system_pods.go:89] "kube-proxy-jt8wr" [a9ddf3ce-29f8-437d-bd31-89411c135012] Running
	I0420 01:30:47.354376  141927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-907988" [f0ff044b-0c2a-4105-9373-34abfbf6b68a] Running
	I0420 01:30:47.354383  141927 system_pods.go:89] "metrics-server-569cc877fc-6rgpj" [70cba472-11c4-4604-a4ad-3575ccedf005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:30:47.354387  141927 system_pods.go:89] "storage-provisioner" [739478ce-5d74-4be0-8a39-d80245d8aa8a] Running
	I0420 01:30:47.354395  141927 system_pods.go:126] duration metric: took 203.115923ms to wait for k8s-apps to be running ...
	I0420 01:30:47.354403  141927 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:30:47.354452  141927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.370946  141927 system_svc.go:56] duration metric: took 16.532953ms WaitForService to wait for kubelet
	I0420 01:30:47.370977  141927 kubeadm.go:576] duration metric: took 4.245884115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:30:47.370997  141927 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:30:47.550097  141927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:30:47.550127  141927 node_conditions.go:123] node cpu capacity is 2
	I0420 01:30:47.550138  141927 node_conditions.go:105] duration metric: took 179.136105ms to run NodePressure ...
	I0420 01:30:47.550150  141927 start.go:240] waiting for startup goroutines ...
	I0420 01:30:47.550156  141927 start.go:245] waiting for cluster config update ...
	I0420 01:30:47.550167  141927 start.go:254] writing updated cluster config ...
	I0420 01:30:47.550493  141927 ssh_runner.go:195] Run: rm -f paused
	I0420 01:30:47.614715  141927 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:30:47.616658  141927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-907988" cluster and "default" namespace by default
	I0420 01:30:47.623645  142057 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.736926697s)
	I0420 01:30:47.623716  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:30:47.648132  142057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:30:47.662521  142057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:30:47.674241  142057 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:30:47.674265  142057 kubeadm.go:156] found existing configuration files:
	
	I0420 01:30:47.674311  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:30:47.684981  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:30:47.685037  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:30:47.696549  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:30:47.706838  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:30:47.706885  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:30:47.717387  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.732194  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:30:47.732252  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:30:47.743425  142057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:30:47.756579  142057 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:30:47.756629  142057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:30:47.769210  142057 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:30:47.832909  142057 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:30:47.832972  142057 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:30:47.987090  142057 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:30:47.987209  142057 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:30:47.987380  142057 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:30:48.253287  142057 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:30:48.255451  142057 out.go:204]   - Generating certificates and keys ...
	I0420 01:30:48.255552  142057 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:30:48.255657  142057 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:30:48.255767  142057 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:30:48.255880  142057 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:30:48.255992  142057 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:30:48.256076  142057 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:30:48.256170  142057 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:30:48.256250  142057 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:30:48.256344  142057 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:30:48.256445  142057 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:30:48.256500  142057 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:30:48.256563  142057 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:30:48.346357  142057 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:30:48.602240  142057 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:30:48.741597  142057 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:30:49.086311  142057 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:30:49.284340  142057 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:30:49.284671  142057 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:30:49.287663  142057 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:30:46.540199  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:48.540848  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:50.541579  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:49.289305  142057 out.go:204]   - Booting up control plane ...
	I0420 01:30:49.289430  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:30:49.289558  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:30:49.289646  142057 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:30:49.309520  142057 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:30:49.311328  142057 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:30:49.311389  142057 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:30:49.448766  142057 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:30:49.448889  142057 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:30:49.950225  142057 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.460713ms
	I0420 01:30:49.950316  142057 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:30:55.452587  142057 kubeadm.go:309] [api-check] The API server is healthy after 5.502061843s
	I0420 01:30:55.466768  142057 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:30:55.500892  142057 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:30:55.538376  142057 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:30:55.538631  142057 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-269507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:30:55.559344  142057 kubeadm.go:309] [bootstrap-token] Using token: jtn2hn.nnhc9vssv65463xy
	I0420 01:30:52.542748  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.040878  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:55.560872  142057 out.go:204]   - Configuring RBAC rules ...
	I0420 01:30:55.561022  142057 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:30:55.575617  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:30:55.583307  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:30:55.586398  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:30:55.596138  142057 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:30:55.599717  142057 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:30:55.861367  142057 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:30:56.310991  142057 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:30:56.860904  142057 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:30:56.860939  142057 kubeadm.go:309] 
	I0420 01:30:56.861051  142057 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:30:56.861077  142057 kubeadm.go:309] 
	I0420 01:30:56.861180  142057 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:30:56.861201  142057 kubeadm.go:309] 
	I0420 01:30:56.861232  142057 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:30:56.861345  142057 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:30:56.861438  142057 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:30:56.861454  142057 kubeadm.go:309] 
	I0420 01:30:56.861534  142057 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:30:56.861544  142057 kubeadm.go:309] 
	I0420 01:30:56.861628  142057 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:30:56.861644  142057 kubeadm.go:309] 
	I0420 01:30:56.861728  142057 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:30:56.861822  142057 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:30:56.861895  142057 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:30:56.861923  142057 kubeadm.go:309] 
	I0420 01:30:56.862120  142057 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:30:56.862228  142057 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:30:56.862246  142057 kubeadm.go:309] 
	I0420 01:30:56.862371  142057 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862532  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:30:56.862571  142057 kubeadm.go:309] 	--control-plane 
	I0420 01:30:56.862580  142057 kubeadm.go:309] 
	I0420 01:30:56.862700  142057 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:30:56.862724  142057 kubeadm.go:309] 
	I0420 01:30:56.862827  142057 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jtn2hn.nnhc9vssv65463xy \
	I0420 01:30:56.862955  142057 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:30:56.863259  142057 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:30:56.863343  142057 cni.go:84] Creating CNI manager for ""
	I0420 01:30:56.863358  142057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:30:56.865193  142057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:30:57.541555  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:00.040222  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:30:56.866515  142057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:30:56.880013  142057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0420 01:30:56.900677  142057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:30:56.900773  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:56.900809  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-269507 minikube.k8s.io/updated_at=2024_04_20T01_30_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=embed-certs-269507 minikube.k8s.io/primary=true
	I0420 01:30:56.942362  142057 ops.go:34] apiserver oom_adj: -16
	I0420 01:30:57.124807  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:57.625201  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.125867  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:58.625845  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.124923  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:30:59.625004  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.125467  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:00.625081  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:01.125446  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.539751  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:04.540090  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:01.625279  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.125084  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:02.625048  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.125567  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:03.625428  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.125592  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:04.625874  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.125031  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:05.625698  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:06.125620  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.054009  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:31:07.054375  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:07.054708  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:06.625682  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.125909  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:07.625563  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.125451  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:08.625265  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.125677  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.625433  142057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:31:09.720318  142057 kubeadm.go:1107] duration metric: took 12.81961115s to wait for elevateKubeSystemPrivileges
	W0420 01:31:09.720362  142057 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:31:09.720373  142057 kubeadm.go:393] duration metric: took 5m17.067399347s to StartCluster
	I0420 01:31:09.720426  142057 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.720552  142057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:31:09.722646  142057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:31:09.722904  142057 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:31:09.724771  142057 out.go:177] * Verifying Kubernetes components...
	I0420 01:31:09.722979  142057 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:31:09.723175  142057 config.go:182] Loaded profile config "embed-certs-269507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:31:09.724863  142057 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-269507"
	I0420 01:31:09.726208  142057 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-269507"
	W0420 01:31:09.726229  142057 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:31:09.724870  142057 addons.go:69] Setting default-storageclass=true in profile "embed-certs-269507"
	I0420 01:31:09.726270  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726289  142057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-269507"
	I0420 01:31:09.724889  142057 addons.go:69] Setting metrics-server=true in profile "embed-certs-269507"
	I0420 01:31:09.726351  142057 addons.go:234] Setting addon metrics-server=true in "embed-certs-269507"
	W0420 01:31:09.726365  142057 addons.go:243] addon metrics-server should already be in state true
	I0420 01:31:09.726395  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.726159  142057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:31:09.726699  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726737  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726771  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.726785  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726803  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.726793  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.742932  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41221
	I0420 01:31:09.743143  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0420 01:31:09.743375  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743666  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.743951  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.743968  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744102  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.744120  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.744439  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.744497  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.745152  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745162  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.745178  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745195  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.745923  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0420 01:31:09.746441  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.747173  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.747202  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.747637  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.747934  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.751736  142057 addons.go:234] Setting addon default-storageclass=true in "embed-certs-269507"
	W0420 01:31:09.751760  142057 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:31:09.751791  142057 host.go:66] Checking if "embed-certs-269507" exists ...
	I0420 01:31:09.752174  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.752199  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.763296  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I0420 01:31:09.763475  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0420 01:31:09.764103  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764119  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.764635  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764656  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.764807  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.764821  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.765353  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765369  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.765562  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.766352  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.767675  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.769455  142057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:31:09.768866  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.770529  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:31:09.770596  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:31:09.770618  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.771959  142057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:31:07.039635  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.040381  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:09.772109  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0420 01:31:09.773531  142057 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:09.773545  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:31:09.773560  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.773989  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.774697  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.774711  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.774889  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775069  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.775522  142057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:31:09.775550  142057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:31:09.775770  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.775840  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.775855  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.775973  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.776144  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.776283  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.776967  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777306  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.777376  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.777621  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.777811  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.777949  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.778092  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.791609  142057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0420 01:31:09.792008  142057 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:31:09.792475  142057 main.go:141] libmachine: Using API Version  1
	I0420 01:31:09.792492  142057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:31:09.792811  142057 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:31:09.793110  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetState
	I0420 01:31:09.794743  142057 main.go:141] libmachine: (embed-certs-269507) Calling .DriverName
	I0420 01:31:09.795008  142057 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:09.795023  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:31:09.795037  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHHostname
	I0420 01:31:09.797655  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798120  142057 main.go:141] libmachine: (embed-certs-269507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:ba", ip: ""} in network mk-embed-certs-269507: {Iface:virbr4 ExpiryTime:2024-04-20 02:16:42 +0000 UTC Type:0 Mac:52:54:00:5d:0f:ba Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-269507 Clientid:01:52:54:00:5d:0f:ba}
	I0420 01:31:09.798144  142057 main.go:141] libmachine: (embed-certs-269507) DBG | domain embed-certs-269507 has defined IP address 192.168.50.184 and MAC address 52:54:00:5d:0f:ba in network mk-embed-certs-269507
	I0420 01:31:09.798394  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHPort
	I0420 01:31:09.798603  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHKeyPath
	I0420 01:31:09.798745  142057 main.go:141] libmachine: (embed-certs-269507) Calling .GetSSHUsername
	I0420 01:31:09.798888  142057 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/embed-certs-269507/id_rsa Username:docker}
	I0420 01:31:09.957088  142057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:31:10.012344  142057 node_ready.go:35] waiting up to 6m0s for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023887  142057 node_ready.go:49] node "embed-certs-269507" has status "Ready":"True"
	I0420 01:31:10.023917  142057 node_ready.go:38] duration metric: took 11.536403ms for node "embed-certs-269507" to be "Ready" ...
	I0420 01:31:10.023929  142057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:10.035096  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:10.210022  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:31:10.222715  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:31:10.251807  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:31:10.251836  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:31:10.342638  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:31:10.342664  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:31:10.480676  142057 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:10.480700  142057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:31:10.655186  142057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:31:11.331066  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121005107s)
	I0420 01:31:11.331125  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.108375538s)
	I0420 01:31:11.331139  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331152  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331165  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331181  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331530  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331601  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331611  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331641  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331664  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331681  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331684  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331692  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.331699  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.331646  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331932  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331959  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.331979  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.331991  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.331989  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.332003  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.364269  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.364296  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.364641  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.364667  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.364671  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809229  142057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.154002194s)
	I0420 01:31:11.809282  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809301  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809618  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.809676  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809688  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.809705  142057 main.go:141] libmachine: Making call to close driver server
	I0420 01:31:11.809717  142057 main.go:141] libmachine: (embed-certs-269507) Calling .Close
	I0420 01:31:11.809954  142057 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:31:11.809983  142057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:31:11.810001  142057 addons.go:470] Verifying addon metrics-server=true in "embed-certs-269507"
	I0420 01:31:11.810004  142057 main.go:141] libmachine: (embed-certs-269507) DBG | Closing plugin on server side
	I0420 01:31:11.811610  142057 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0420 01:31:12.055506  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:12.055793  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:11.813049  142057 addons.go:505] duration metric: took 2.090078148s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0420 01:31:12.044618  142057 pod_ready.go:102] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:12.565519  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.565543  142057 pod_ready.go:81] duration metric: took 2.530392572s for pod "coredns-7db6d8ff4d-ltzhp" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.565552  142057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.577986  142057 pod_ready.go:92] pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.578011  142057 pod_ready.go:81] duration metric: took 12.452506ms for pod "coredns-7db6d8ff4d-mpf5l" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.578020  142057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595104  142057 pod_ready.go:92] pod "etcd-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.595129  142057 pod_ready.go:81] duration metric: took 17.103577ms for pod "etcd-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.595139  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602502  142057 pod_ready.go:92] pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.602524  142057 pod_ready.go:81] duration metric: took 7.377832ms for pod "kube-apiserver-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.602538  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608443  142057 pod_ready.go:92] pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.608462  142057 pod_ready.go:81] duration metric: took 5.916781ms for pod "kube-controller-manager-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.608471  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939418  142057 pod_ready.go:92] pod "kube-proxy-4x66x" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:12.939444  142057 pod_ready.go:81] duration metric: took 330.966964ms for pod "kube-proxy-4x66x" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:12.939454  142057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341528  142057 pod_ready.go:92] pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace has status "Ready":"True"
	I0420 01:31:13.341556  142057 pod_ready.go:81] duration metric: took 402.093841ms for pod "kube-scheduler-embed-certs-269507" in "kube-system" namespace to be "Ready" ...
	I0420 01:31:13.341565  142057 pod_ready.go:38] duration metric: took 3.317622631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:13.341583  142057 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:31:13.341648  142057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:31:13.361938  142057 api_server.go:72] duration metric: took 3.638999445s to wait for apiserver process to appear ...
	I0420 01:31:13.361967  142057 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:31:13.361987  142057 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0420 01:31:13.367149  142057 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0420 01:31:13.368215  142057 api_server.go:141] control plane version: v1.30.0
	I0420 01:31:13.368243  142057 api_server.go:131] duration metric: took 6.268859ms to wait for apiserver health ...
	I0420 01:31:13.368254  142057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:31:13.545177  142057 system_pods.go:59] 9 kube-system pods found
	I0420 01:31:13.545203  142057 system_pods.go:61] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.545207  142057 system_pods.go:61] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.545212  142057 system_pods.go:61] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.545215  142057 system_pods.go:61] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.545219  142057 system_pods.go:61] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.545222  142057 system_pods.go:61] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.545224  142057 system_pods.go:61] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.545230  142057 system_pods.go:61] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.545233  142057 system_pods.go:61] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.545242  142057 system_pods.go:74] duration metric: took 176.980813ms to wait for pod list to return data ...
	I0420 01:31:13.545249  142057 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:31:13.739865  142057 default_sa.go:45] found service account: "default"
	I0420 01:31:13.739892  142057 default_sa.go:55] duration metric: took 194.636223ms for default service account to be created ...
	I0420 01:31:13.739903  142057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:31:13.942758  142057 system_pods.go:86] 9 kube-system pods found
	I0420 01:31:13.942785  142057 system_pods.go:89] "coredns-7db6d8ff4d-ltzhp" [fca2da30-b908-46fc-a028-d43a17c6307e] Running
	I0420 01:31:13.942793  142057 system_pods.go:89] "coredns-7db6d8ff4d-mpf5l" [331105fe-dd08-409f-9b2d-658b958cd1a2] Running
	I0420 01:31:13.942801  142057 system_pods.go:89] "etcd-embed-certs-269507" [7dc38a73-8614-42d0-afb5-f2ffdbb8ef1b] Running
	I0420 01:31:13.942812  142057 system_pods.go:89] "kube-apiserver-embed-certs-269507" [c6741448-01ad-4be4-a120-c69b27fbc818] Running
	I0420 01:31:13.942819  142057 system_pods.go:89] "kube-controller-manager-embed-certs-269507" [003fc040-4032-4ff8-99af-71305dae664c] Running
	I0420 01:31:13.942829  142057 system_pods.go:89] "kube-proxy-4x66x" [75da8306-56f8-49bf-a2e7-cf5d4877dc16] Running
	I0420 01:31:13.942835  142057 system_pods.go:89] "kube-scheduler-embed-certs-269507" [86a64ec5-dd53-4702-9dea-8dbab58b38e3] Running
	I0420 01:31:13.942846  142057 system_pods.go:89] "metrics-server-569cc877fc-jwbst" [4d13a078-f3cd-43c2-8f15-fe5c36445294] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:31:13.942854  142057 system_pods.go:89] "storage-provisioner" [8eee97ab-bb31-4a3d-be80-845b6545e897] Running
	I0420 01:31:13.942863  142057 system_pods.go:126] duration metric: took 202.954629ms to wait for k8s-apps to be running ...
	I0420 01:31:13.942873  142057 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:31:13.942926  142057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:31:13.962754  142057 system_svc.go:56] duration metric: took 19.872903ms WaitForService to wait for kubelet
	I0420 01:31:13.962781  142057 kubeadm.go:576] duration metric: took 4.239850872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:31:13.962802  142057 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:31:14.139800  142057 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:31:14.139834  142057 node_conditions.go:123] node cpu capacity is 2
	I0420 01:31:14.139848  142057 node_conditions.go:105] duration metric: took 177.041675ms to run NodePressure ...
	I0420 01:31:14.139862  142057 start.go:240] waiting for startup goroutines ...
	I0420 01:31:14.139872  142057 start.go:245] waiting for cluster config update ...
	I0420 01:31:14.139886  142057 start.go:254] writing updated cluster config ...
	I0420 01:31:14.140201  142057 ssh_runner.go:195] Run: rm -f paused
	I0420 01:31:14.190985  142057 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:31:14.193207  142057 out.go:177] * Done! kubectl is now configured to use "embed-certs-269507" cluster and "default" namespace by default
	I0420 01:31:11.040724  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:13.043491  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:15.540182  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:17.540894  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:19.541858  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:22.056094  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:22.056315  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:22.039484  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:24.043137  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:26.043262  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:28.540379  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:30.540568  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:32.543371  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:35.040187  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:37.541354  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:40.039779  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:42.057024  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:31:42.057278  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:31:42.040147  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:44.540170  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:46.540576  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:48.543604  141746 pod_ready.go:102] pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace has status "Ready":"False"
	I0420 01:31:51.034230  141746 pod_ready.go:81] duration metric: took 4m0.001077028s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" ...
	E0420 01:31:51.034258  141746 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-lcbcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0420 01:31:51.034280  141746 pod_ready.go:38] duration metric: took 4m12.046687249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:31:51.034308  141746 kubeadm.go:591] duration metric: took 4m55.947094434s to restartPrimaryControlPlane
	W0420 01:31:51.034367  141746 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0420 01:31:51.034400  141746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
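	(At this point the 4m0s wait for the metrics-server pod has expired, so minikube gives up restarting the existing control plane and falls back to a full `kubeadm reset` followed by a fresh init. To see why the pod never reported Ready, one would normally inspect its status and events directly; the namespace and pod name below come from the log, the label selector is the conventional metrics-server label and is an assumption here:)

	    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	    kubectl -n kube-system describe pod metrics-server-569cc877fc-lcbcz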
	I0420 01:32:22.058965  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:32:22.059213  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:32:22.059231  142411 kubeadm.go:309] 
	I0420 01:32:22.059284  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:32:22.059341  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:32:22.059351  142411 kubeadm.go:309] 
	I0420 01:32:22.059398  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:32:22.059449  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:32:22.059581  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:32:22.059606  142411 kubeadm.go:309] 
	I0420 01:32:22.059693  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:32:22.059725  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:32:22.059796  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:32:22.059821  142411 kubeadm.go:309] 
	I0420 01:32:22.059916  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:32:22.060046  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:32:22.060068  142411 kubeadm.go:309] 
	I0420 01:32:22.060225  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:32:22.060371  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:32:22.060498  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:32:22.060624  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:32:22.060643  142411 kubeadm.go:309] 
	I0420 01:32:22.061155  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:22.061294  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:32:22.061403  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0420 01:32:22.061569  142411 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
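	(The failing profile here is running Kubernetes v1.20.0 on CRI-O, and the kubelet never answers on its local healthz port 10248. The checks kubeadm prints above can be run directly on the node, for example via `minikube ssh` on the affected profile; a common culprit for this symptom with older kubelets on CRI-O is a cgroup-driver mismatch between the kubelet config and the CRI-O config, so a quick grep of both is a reasonable first step. Paths are the ones shown in the log plus CRI-O's standard config location:)

	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    grep -i cgroup /var/lib/kubelet/config.yaml /etc/crio/crio.conf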
	I0420 01:32:22.061628  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0420 01:32:23.211059  142411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.149398853s)
	I0420 01:32:23.211147  142411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.228140  142411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.240832  142411 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.240868  142411 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.240912  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.252674  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.252735  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.264128  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.274998  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.275059  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.286449  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.297377  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.297452  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.308971  142411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.320775  142411 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.320842  142411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
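	(The sequence above is minikube's kubeconfig cleanup before retrying `kubeadm init`: each of the four files under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint, or the file itself, is missing. A roughly equivalent shell loop, using the endpoint shown in the log:)

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done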
	I0420 01:32:23.333601  142411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.490252  141746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.455825605s)
	I0420 01:32:23.490330  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:23.515027  141746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 01:32:23.528835  141746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 01:32:23.542901  141746 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 01:32:23.542927  141746 kubeadm.go:156] found existing configuration files:
	
	I0420 01:32:23.542969  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 01:32:23.554931  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 01:32:23.555006  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 01:32:23.570665  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 01:32:23.583505  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 01:32:23.583576  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 01:32:23.595835  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.607468  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 01:32:23.607538  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 01:32:23.620629  141746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 01:32:23.634141  141746 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 01:32:23.634222  141746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 01:32:23.648360  141746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0420 01:32:23.727697  141746 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 01:32:23.727825  141746 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:32:23.899280  141746 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:32:23.899376  141746 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:32:23.899456  141746 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:32:24.139299  141746 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:32:24.141410  141746 out.go:204]   - Generating certificates and keys ...
	I0420 01:32:24.141522  141746 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:32:24.141618  141746 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:32:24.141719  141746 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:32:24.141814  141746 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:32:24.141912  141746 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:32:24.141987  141746 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:32:24.142076  141746 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:32:24.142172  141746 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:32:24.142348  141746 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:32:24.142589  141746 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:32:24.142757  141746 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:32:24.142990  141746 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:32:24.247270  141746 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:32:24.326535  141746 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 01:32:24.538489  141746 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:32:24.594810  141746 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:32:24.712812  141746 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:32:24.713304  141746 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:32:24.719376  141746 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:32:24.721510  141746 out.go:204]   - Booting up control plane ...
	I0420 01:32:24.721649  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:32:24.721781  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:32:24.722470  141746 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:32:24.748410  141746 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:32:24.750247  141746 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:32:24.750320  141746 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:32:24.906734  141746 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 01:32:24.906859  141746 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 01:32:25.409625  141746 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.844847ms
	I0420 01:32:25.409771  141746 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 01:32:23.603058  142411 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0420 01:32:30.912062  141746 kubeadm.go:309] [api-check] The API server is healthy after 5.502434175s
	I0420 01:32:30.935231  141746 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 01:32:30.954860  141746 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 01:32:30.990255  141746 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 01:32:30.990480  141746 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-338118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 01:32:31.004218  141746 kubeadm.go:309] [bootstrap-token] Using token: 6ub3et.0wyu42zodual4kt8
	I0420 01:32:31.005771  141746 out.go:204]   - Configuring RBAC rules ...
	I0420 01:32:31.005875  141746 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 01:32:31.011978  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 01:32:31.020750  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 01:32:31.024958  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 01:32:31.032499  141746 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 01:32:31.037128  141746 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 01:32:31.320324  141746 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 01:32:31.761773  141746 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 01:32:32.322540  141746 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 01:32:32.322563  141746 kubeadm.go:309] 
	I0420 01:32:32.322633  141746 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 01:32:32.322648  141746 kubeadm.go:309] 
	I0420 01:32:32.322728  141746 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 01:32:32.322737  141746 kubeadm.go:309] 
	I0420 01:32:32.322763  141746 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 01:32:32.322833  141746 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 01:32:32.322906  141746 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 01:32:32.322918  141746 kubeadm.go:309] 
	I0420 01:32:32.323005  141746 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 01:32:32.323015  141746 kubeadm.go:309] 
	I0420 01:32:32.323083  141746 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 01:32:32.323110  141746 kubeadm.go:309] 
	I0420 01:32:32.323184  141746 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 01:32:32.323304  141746 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 01:32:32.323362  141746 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 01:32:32.323372  141746 kubeadm.go:309] 
	I0420 01:32:32.323522  141746 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 01:32:32.323660  141746 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 01:32:32.323677  141746 kubeadm.go:309] 
	I0420 01:32:32.323765  141746 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.323916  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff \
	I0420 01:32:32.323948  141746 kubeadm.go:309] 	--control-plane 
	I0420 01:32:32.323957  141746 kubeadm.go:309] 
	I0420 01:32:32.324035  141746 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 01:32:32.324049  141746 kubeadm.go:309] 
	I0420 01:32:32.324201  141746 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6ub3et.0wyu42zodual4kt8 \
	I0420 01:32:32.324348  141746 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f0a50c4a6736b927f645cc5729b18acddc10382733abc1159a72bef443e87ff 
	I0420 01:32:32.324967  141746 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
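	(The join commands above embed a bootstrap token and the SHA-256 hash of the cluster CA's public key. If that hash ever needs to be recomputed on this node, the standard kubeadm recipe applies, pointed at minikube's certificate directory from the [certs] phase above; this assumes an RSA CA, which is kubeadm's default:)

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'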
	I0420 01:32:32.325210  141746 cni.go:84] Creating CNI manager for ""
	I0420 01:32:32.325228  141746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0420 01:32:32.327624  141746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0420 01:32:32.329029  141746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0420 01:32:32.344181  141746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
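	(The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration the previous line announced. Its exact contents are not shown in this log; a typical bridge conflist for a single-node cluster looks roughly like the sketch below, where the plugin parameters and pod subnet are assumptions, not values taken from this run:)

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }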
	I0420 01:32:32.368978  141746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 01:32:32.369052  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:32.369086  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-338118 minikube.k8s.io/updated_at=2024_04_20T01_32_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=no-preload-338118 minikube.k8s.io/primary=true
	I0420 01:32:32.579160  141746 ops.go:34] apiserver oom_adj: -16
	I0420 01:32:32.579218  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.079458  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:33.579498  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.079957  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:34.579520  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.079902  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:35.579955  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.079525  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:36.579612  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.079831  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:37.579989  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.079481  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:38.579798  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.080239  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:39.579654  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.080267  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:40.579837  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.079840  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:41.579347  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.079368  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:42.579641  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.079257  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:43.579647  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.079317  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.580002  141746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 01:32:44.698993  141746 kubeadm.go:1107] duration metric: took 12.330007154s to wait for elevateKubeSystemPrivileges
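	(The repeated `kubectl get sa default` calls above are a polling loop: after creating the `minikube-rbac` ClusterRoleBinding (cluster-admin for the kube-system:default service account) and labeling the node, minikube retries roughly every 500ms until the `default` ServiceAccount is available, which is what the 12.3s elevateKubeSystemPrivileges duration measures. The end state can be checked with ordinary kubectl:)

	    kubectl get clusterrolebinding minikube-rbac
	    kubectl get serviceaccount default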
	W0420 01:32:44.699036  141746 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 01:32:44.699045  141746 kubeadm.go:393] duration metric: took 5m49.674421659s to StartCluster
	I0420 01:32:44.699064  141746 settings.go:142] acquiring lock: {Name:mkc5d2e666f6d4d16c663287de08a3984aa5ca8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.699166  141746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:32:44.700731  141746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/kubeconfig: {Name:mkd77eee241d71a065738070c48a18b173919ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:32:44.700982  141746 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:32:44.702752  141746 out.go:177] * Verifying Kubernetes components...
	I0420 01:32:44.701040  141746 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:32:44.701201  141746 config.go:182] Loaded profile config "no-preload-338118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:32:44.704065  141746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-338118"
	I0420 01:32:44.704078  141746 addons.go:69] Setting metrics-server=true in profile "no-preload-338118"
	I0420 01:32:44.704077  141746 addons.go:69] Setting default-storageclass=true in profile "no-preload-338118"
	I0420 01:32:44.704099  141746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-338118"
	W0420 01:32:44.704105  141746 addons.go:243] addon storage-provisioner should already be in state true
	I0420 01:32:44.704114  141746 addons.go:234] Setting addon metrics-server=true in "no-preload-338118"
	I0420 01:32:44.704113  141746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-338118"
	W0420 01:32:44.704124  141746 addons.go:243] addon metrics-server should already be in state true
	I0420 01:32:44.704151  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704157  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.704069  141746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:32:44.704452  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704485  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704503  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704521  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.704535  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.704545  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.720663  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0420 01:32:44.720685  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0420 01:32:44.721210  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721222  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.721746  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721766  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.721901  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.721925  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.722282  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722311  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722860  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.722889  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.722914  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.723194  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0420 01:32:44.723775  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.724401  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.724427  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.724790  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.724975  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.728728  141746 addons.go:234] Setting addon default-storageclass=true in "no-preload-338118"
	W0420 01:32:44.728751  141746 addons.go:243] addon default-storageclass should already be in state true
	I0420 01:32:44.728780  141746 host.go:66] Checking if "no-preload-338118" exists ...
	I0420 01:32:44.729136  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.729161  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.738505  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0420 01:32:44.738893  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.739388  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.739409  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.739916  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.740120  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.741929  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0420 01:32:44.742090  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.744131  141746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 01:32:44.742538  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.745561  141746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:44.745579  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 01:32:44.745597  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.744662  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.745640  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.745994  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.746345  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.747491  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0420 01:32:44.747878  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.748594  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.748731  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.748752  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.750445  141746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0420 01:32:44.749050  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.749380  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.749990  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.752010  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 01:32:44.752029  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 01:32:44.752046  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.752131  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.752155  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.752307  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.752479  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.752647  141746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 01:32:44.752676  141746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 01:32:44.752676  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.754727  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755188  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.755216  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.755497  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.755696  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.755866  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.756034  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.768442  141746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0420 01:32:44.768887  141746 main.go:141] libmachine: () Calling .GetVersion
	I0420 01:32:44.769453  141746 main.go:141] libmachine: Using API Version  1
	I0420 01:32:44.769473  141746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 01:32:44.769852  141746 main.go:141] libmachine: () Calling .GetMachineName
	I0420 01:32:44.770359  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetState
	I0420 01:32:44.772155  141746 main.go:141] libmachine: (no-preload-338118) Calling .DriverName
	I0420 01:32:44.772443  141746 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:44.772651  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 01:32:44.772686  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHHostname
	I0420 01:32:44.775775  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776177  141746 main.go:141] libmachine: (no-preload-338118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:65:26", ip: ""} in network mk-no-preload-338118: {Iface:virbr3 ExpiryTime:2024-04-20 02:26:24 +0000 UTC Type:0 Mac:52:54:00:14:65:26 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:no-preload-338118 Clientid:01:52:54:00:14:65:26}
	I0420 01:32:44.776205  141746 main.go:141] libmachine: (no-preload-338118) DBG | domain no-preload-338118 has defined IP address 192.168.72.89 and MAC address 52:54:00:14:65:26 in network mk-no-preload-338118
	I0420 01:32:44.776313  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHPort
	I0420 01:32:44.776492  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHKeyPath
	I0420 01:32:44.776667  141746 main.go:141] libmachine: (no-preload-338118) Calling .GetSSHUsername
	I0420 01:32:44.776832  141746 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/no-preload-338118/id_rsa Username:docker}
	I0420 01:32:44.930301  141746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:32:44.948472  141746 node_ready.go:35] waiting up to 6m0s for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960637  141746 node_ready.go:49] node "no-preload-338118" has status "Ready":"True"
	I0420 01:32:44.960664  141746 node_ready.go:38] duration metric: took 12.15407ms for node "no-preload-338118" to be "Ready" ...
	I0420 01:32:44.960676  141746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:44.971143  141746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980894  141746 pod_ready.go:92] pod "etcd-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.980917  141746 pod_ready.go:81] duration metric: took 9.749994ms for pod "etcd-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.980929  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995192  141746 pod_ready.go:92] pod "kube-apiserver-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:44.995217  141746 pod_ready.go:81] duration metric: took 14.279681ms for pod "kube-apiserver-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:44.995229  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004302  141746 pod_ready.go:92] pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:45.004324  141746 pod_ready.go:81] duration metric: took 9.086713ms for pod "kube-controller-manager-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.004338  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:45.062482  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 01:32:45.066314  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 01:32:45.066334  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0420 01:32:45.093830  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 01:32:45.148558  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 01:32:45.148600  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 01:32:45.235321  141746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:45.235349  141746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 01:32:45.275661  141746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 01:32:46.686292  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592425062s)
	I0420 01:32:46.686344  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.623774979s)
	I0420 01:32:46.686360  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686375  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686385  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686401  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686822  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.686897  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.686911  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.686920  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686835  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.686839  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687001  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687013  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.687027  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.686850  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.687153  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687166  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.687359  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.687373  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.697988  141746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.422274698s)
	I0420 01:32:46.698045  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698059  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698320  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698339  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698351  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.698359  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.698568  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.698658  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.698676  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.698687  141746 addons.go:470] Verifying addon metrics-server=true in "no-preload-338118"
	I0420 01:32:46.733170  141746 main.go:141] libmachine: Making call to close driver server
	I0420 01:32:46.733198  141746 main.go:141] libmachine: (no-preload-338118) Calling .Close
	I0420 01:32:46.733551  141746 main.go:141] libmachine: Successfully made call to close driver server
	I0420 01:32:46.733573  141746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0420 01:32:46.733605  141746 main.go:141] libmachine: (no-preload-338118) DBG | Closing plugin on server side
	I0420 01:32:46.735297  141746 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0420 01:32:46.736665  141746 addons.go:505] duration metric: took 2.035625149s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
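	(With the addon manifests applied above (metrics-apiservice.yaml, metrics-server-deployment.yaml, metrics-server-rbac.yaml, metrics-server-service.yaml), the remaining wait in these tests is for the metrics-server pod itself to become Ready. Once it is, the aggregated metrics API should report as available and `kubectl top` should return data; the APIService name below is the one metrics-server conventionally registers and is an assumption here, not read from this log:)

	    kubectl get apiservice v1beta1.metrics.k8s.io
	    kubectl top nodes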
	I0420 01:32:47.011271  141746 pod_ready.go:92] pod "kube-proxy-f57d9" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.011299  141746 pod_ready.go:81] duration metric: took 2.006954798s for pod "kube-proxy-f57d9" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.011309  141746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025378  141746 pod_ready.go:92] pod "kube-scheduler-no-preload-338118" in "kube-system" namespace has status "Ready":"True"
	I0420 01:32:47.025408  141746 pod_ready.go:81] duration metric: took 14.090474ms for pod "kube-scheduler-no-preload-338118" in "kube-system" namespace to be "Ready" ...
	I0420 01:32:47.025421  141746 pod_ready.go:38] duration metric: took 2.064731781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:32:47.025443  141746 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:32:47.025511  141746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:32:47.052680  141746 api_server.go:72] duration metric: took 2.351656586s to wait for apiserver process to appear ...
	I0420 01:32:47.052712  141746 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:32:47.052738  141746 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I0420 01:32:47.061908  141746 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I0420 01:32:47.065615  141746 api_server.go:141] control plane version: v1.30.0
	I0420 01:32:47.065641  141746 api_server.go:131] duration metric: took 12.920384ms to wait for apiserver health ...
	I0420 01:32:47.065651  141746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:32:47.158039  141746 system_pods.go:59] 9 kube-system pods found
	I0420 01:32:47.158076  141746 system_pods.go:61] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158087  141746 system_pods.go:61] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.158096  141746 system_pods.go:61] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.158101  141746 system_pods.go:61] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.158107  141746 system_pods.go:61] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.158113  141746 system_pods.go:61] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.158117  141746 system_pods.go:61] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.158126  141746 system_pods.go:61] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.158134  141746 system_pods.go:61] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.158147  141746 system_pods.go:74] duration metric: took 92.489697ms to wait for pod list to return data ...
	I0420 01:32:47.158162  141746 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:32:47.351962  141746 default_sa.go:45] found service account: "default"
	I0420 01:32:47.352002  141746 default_sa.go:55] duration metric: took 193.830142ms for default service account to be created ...
	I0420 01:32:47.352016  141746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:32:47.557471  141746 system_pods.go:86] 9 kube-system pods found
	I0420 01:32:47.557511  141746 system_pods.go:89] "coredns-7db6d8ff4d-8jvsz" [d83784a0-6942-4906-ba66-76d7fa25dc04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557524  141746 system_pods.go:89] "coredns-7db6d8ff4d-lhnxg" [c0fb3119-abcb-4646-9aae-a54438a76adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0420 01:32:47.557534  141746 system_pods.go:89] "etcd-no-preload-338118" [1ff1cf84-276b-45c4-9da9-8266ee15a4f6] Running
	I0420 01:32:47.557540  141746 system_pods.go:89] "kube-apiserver-no-preload-338118" [313150c1-d21e-43d5-8ae0-6331e5007a66] Running
	I0420 01:32:47.557547  141746 system_pods.go:89] "kube-controller-manager-no-preload-338118" [eef34e56-ed71-4e76-a732-341878f3f90d] Running
	I0420 01:32:47.557554  141746 system_pods.go:89] "kube-proxy-f57d9" [54252f52-9bb1-48a2-98e1-980f40fa727d] Running
	I0420 01:32:47.557564  141746 system_pods.go:89] "kube-scheduler-no-preload-338118" [4491c2f0-7b45-4c78-b91e-8fcbbcc890fd] Running
	I0420 01:32:47.557577  141746 system_pods.go:89] "metrics-server-569cc877fc-xbwdm" [798c7b61-a93d-4daf-a832-e15056a2ae24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0420 01:32:47.557589  141746 system_pods.go:89] "storage-provisioner" [51c12418-805f-4923-b7ab-4fa0fe07ec9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0420 01:32:47.557602  141746 system_pods.go:126] duration metric: took 205.577946ms to wait for k8s-apps to be running ...
	I0420 01:32:47.557615  141746 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:32:47.557674  141746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:32:47.577745  141746 system_svc.go:56] duration metric: took 20.111982ms WaitForService to wait for kubelet
	I0420 01:32:47.577774  141746 kubeadm.go:576] duration metric: took 2.876759476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:32:47.577794  141746 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:32:47.753216  141746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0420 01:32:47.753246  141746 node_conditions.go:123] node cpu capacity is 2
	I0420 01:32:47.753257  141746 node_conditions.go:105] duration metric: took 175.457668ms to run NodePressure ...
	I0420 01:32:47.753269  141746 start.go:240] waiting for startup goroutines ...
	I0420 01:32:47.753275  141746 start.go:245] waiting for cluster config update ...
	I0420 01:32:47.753286  141746 start.go:254] writing updated cluster config ...
	I0420 01:32:47.753612  141746 ssh_runner.go:195] Run: rm -f paused
	I0420 01:32:47.804681  141746 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:32:47.806823  141746 out.go:177] * Done! kubectl is now configured to use "no-preload-338118" cluster and "default" namespace by default
	I0420 01:34:20.028550  142411 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0420 01:34:20.028769  142411 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0420 01:34:20.030749  142411 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0420 01:34:20.030826  142411 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 01:34:20.030947  142411 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 01:34:20.031078  142411 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 01:34:20.031217  142411 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 01:34:20.031319  142411 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 01:34:20.032927  142411 out.go:204]   - Generating certificates and keys ...
	I0420 01:34:20.033024  142411 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 01:34:20.033110  142411 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 01:34:20.033211  142411 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0420 01:34:20.033286  142411 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0420 01:34:20.033410  142411 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0420 01:34:20.033496  142411 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0420 01:34:20.033597  142411 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0420 01:34:20.033695  142411 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0420 01:34:20.033805  142411 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0420 01:34:20.033921  142411 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0420 01:34:20.033972  142411 kubeadm.go:309] [certs] Using the existing "sa" key
	I0420 01:34:20.034042  142411 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 01:34:20.034125  142411 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 01:34:20.034200  142411 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 01:34:20.034287  142411 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 01:34:20.034355  142411 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 01:34:20.034510  142411 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 01:34:20.034614  142411 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 01:34:20.034680  142411 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 01:34:20.034760  142411 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 01:34:20.036300  142411 out.go:204]   - Booting up control plane ...
	I0420 01:34:20.036380  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 01:34:20.036479  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 01:34:20.036583  142411 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 01:34:20.036705  142411 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 01:34:20.036888  142411 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0420 01:34:20.036955  142411 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0420 01:34:20.037046  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037228  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037291  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037494  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037576  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037730  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.037789  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.037977  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038044  142411 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0420 01:34:20.038262  142411 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0420 01:34:20.038284  142411 kubeadm.go:309] 
	I0420 01:34:20.038341  142411 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0420 01:34:20.038382  142411 kubeadm.go:309] 		timed out waiting for the condition
	I0420 01:34:20.038396  142411 kubeadm.go:309] 
	I0420 01:34:20.038443  142411 kubeadm.go:309] 	This error is likely caused by:
	I0420 01:34:20.038476  142411 kubeadm.go:309] 		- The kubelet is not running
	I0420 01:34:20.038612  142411 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0420 01:34:20.038625  142411 kubeadm.go:309] 
	I0420 01:34:20.038735  142411 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0420 01:34:20.038767  142411 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0420 01:34:20.038794  142411 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0420 01:34:20.038808  142411 kubeadm.go:309] 
	I0420 01:34:20.038902  142411 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0420 01:34:20.038977  142411 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0420 01:34:20.038987  142411 kubeadm.go:309] 
	I0420 01:34:20.039101  142411 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0420 01:34:20.039203  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0420 01:34:20.039274  142411 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0420 01:34:20.039342  142411 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0420 01:34:20.039384  142411 kubeadm.go:309] 
	I0420 01:34:20.039417  142411 kubeadm.go:393] duration metric: took 8m0.622979268s to StartCluster
	I0420 01:34:20.039459  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:34:20.039514  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:34:20.090236  142411 cri.go:89] found id: ""
	I0420 01:34:20.090262  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.090270  142411 logs.go:278] No container was found matching "kube-apiserver"
	I0420 01:34:20.090276  142411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:34:20.090331  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:34:20.133841  142411 cri.go:89] found id: ""
	I0420 01:34:20.133867  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.133875  142411 logs.go:278] No container was found matching "etcd"
	I0420 01:34:20.133883  142411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:34:20.133955  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:34:20.176186  142411 cri.go:89] found id: ""
	I0420 01:34:20.176219  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.176230  142411 logs.go:278] No container was found matching "coredns"
	I0420 01:34:20.176235  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:34:20.176295  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:34:20.214895  142411 cri.go:89] found id: ""
	I0420 01:34:20.214932  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.214944  142411 logs.go:278] No container was found matching "kube-scheduler"
	I0420 01:34:20.214951  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:34:20.215018  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:34:20.257759  142411 cri.go:89] found id: ""
	I0420 01:34:20.257786  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.257795  142411 logs.go:278] No container was found matching "kube-proxy"
	I0420 01:34:20.257800  142411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:34:20.257857  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:34:20.298111  142411 cri.go:89] found id: ""
	I0420 01:34:20.298153  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.298164  142411 logs.go:278] No container was found matching "kube-controller-manager"
	I0420 01:34:20.298172  142411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:34:20.298226  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:34:20.333435  142411 cri.go:89] found id: ""
	I0420 01:34:20.333469  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.333481  142411 logs.go:278] No container was found matching "kindnet"
	I0420 01:34:20.333489  142411 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0420 01:34:20.333554  142411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0420 01:34:20.370848  142411 cri.go:89] found id: ""
	I0420 01:34:20.370872  142411 logs.go:276] 0 containers: []
	W0420 01:34:20.370880  142411 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0420 01:34:20.370890  142411 logs.go:123] Gathering logs for kubelet ...
	I0420 01:34:20.370902  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:34:20.425495  142411 logs.go:123] Gathering logs for dmesg ...
	I0420 01:34:20.425536  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:34:20.442039  142411 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:34:20.442066  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0420 01:34:20.523456  142411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0420 01:34:20.523483  142411 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:34:20.523504  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:34:20.633387  142411 logs.go:123] Gathering logs for container status ...
	I0420 01:34:20.633427  142411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0420 01:34:20.688731  142411 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0420 01:34:20.688783  142411 out.go:239] * 
	W0420 01:34:20.688839  142411 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.688862  142411 out.go:239] * 
	W0420 01:34:20.689758  142411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 01:34:20.693376  142411 out.go:177] 
	W0420 01:34:20.694909  142411 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0420 01:34:20.694971  142411 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0420 01:34:20.695003  142411 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0420 01:34:20.696409  142411 out.go:177] 
	
	
	==> CRI-O <==
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.387662406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577469387635552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e3aec39-6a6c-443e-9393-00f5af7cac4e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.388597420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fae58af-9b56-4d3d-9feb-fcf83e784a7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.388697890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fae58af-9b56-4d3d-9feb-fcf83e784a7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.388734044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0fae58af-9b56-4d3d-9feb-fcf83e784a7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.422792885Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab614987-e8a8-416a-b292-cc45ba105856 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.422951235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab614987-e8a8-416a-b292-cc45ba105856 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.424671652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7111d2de-d5f9-4113-b637-4d9581783ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.425147312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577469425122901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7111d2de-d5f9-4113-b637-4d9581783ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.426087099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a9a78a5-7a02-44a3-b19f-29e98edca7de name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.426160364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a9a78a5-7a02-44a3-b19f-29e98edca7de name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.426204131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3a9a78a5-7a02-44a3-b19f-29e98edca7de name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.466592764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=666fc5a4-5ea9-4add-be7b-dbe46716bfc4 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.466700596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=666fc5a4-5ea9-4add-be7b-dbe46716bfc4 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.468177987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be436e5d-3cd3-49c1-9496-6566e6e93e7a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.468584730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577469468563289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be436e5d-3cd3-49c1-9496-6566e6e93e7a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.469418490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27deb6d6-20a7-435f-9ed5-454a146575df name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.469494492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27deb6d6-20a7-435f-9ed5-454a146575df name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.469544533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=27deb6d6-20a7-435f-9ed5-454a146575df name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.507045436Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42d51793-b72f-4558-bd80-dd14c788d366 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.507177005Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42d51793-b72f-4558-bd80-dd14c788d366 name=/runtime.v1.RuntimeService/Version
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.508335253Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4157b770-cd6d-4470-8504-9c1963f465f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.508827357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713577469508805418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4157b770-cd6d-4470-8504-9c1963f465f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.509368462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abdc4dc2-f6e6-4f7e-bba5-fd7f9b76fb5c name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.509452143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abdc4dc2-f6e6-4f7e-bba5-fd7f9b76fb5c name=/runtime.v1.RuntimeService/ListContainers
	Apr 20 01:44:29 old-k8s-version-564860 crio[649]: time="2024-04-20 01:44:29.509491128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=abdc4dc2-f6e6-4f7e-bba5-fd7f9b76fb5c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr20 01:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057920] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044405] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.872024] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.695018] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Apr20 01:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.212298] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068714] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074791] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.229235] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.132751] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.310058] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.263827] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.070157] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.032326] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +8.834390] kauditd_printk_skb: 46 callbacks suppressed
	[Apr20 01:30] systemd-fstab-generator[5001]: Ignoring "noauto" option for root device
	[Apr20 01:32] systemd-fstab-generator[5277]: Ignoring "noauto" option for root device
	[  +0.067931] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:44:29 up 18 min,  0 users,  load average: 0.24, 0.10, 0.09
	Linux old-k8s-version-564860 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000baa480)
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]: goroutine 157 [select]:
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bbdef0, 0x4f0ac20, 0xc0003dadc0, 0x1, 0xc0001020c0)
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024f180, 0xc0001020c0)
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b52a20, 0xc000b94b40)
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6655]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 20 01:44:28 old-k8s-version-564860 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 20 01:44:28 old-k8s-version-564860 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 20 01:44:28 old-k8s-version-564860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 125.
	Apr 20 01:44:28 old-k8s-version-564860 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 20 01:44:28 old-k8s-version-564860 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6682]: I0420 01:44:28.913309    6682 server.go:416] Version: v1.20.0
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6682]: I0420 01:44:28.914142    6682 server.go:837] Client rotation is on, will bootstrap in background
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6682]: I0420 01:44:28.917071    6682 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6682]: W0420 01:44:28.918259    6682 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 20 01:44:28 old-k8s-version-564860 kubelet[6682]: I0420 01:44:28.918346    6682 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 2 (271.049231ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-564860" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (63.49s)
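The captured log above ends with minikube's own "Suggestion:" line: inspect the kubelet and retry the start with an explicit cgroup driver. A minimal, illustrative sketch of those steps follows, assuming the profile name old-k8s-version-564860 and the CRI-O socket path quoted in the log; this is not part of the test harness, only a hedged restatement of the commands the output already suggests:

	# On the node (reachable e.g. via: minikube ssh -p old-k8s-version-564860)
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry with the cgroup driver pinned to systemd, as the log suggests:
	minikube start -p old-k8s-version-564860 --extra-config=kubelet.cgroup-driver=systemd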

                                                
                                    

Test pass (243/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.14
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 5.55
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.13
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 116.91
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 185.02
29 TestAddons/parallel/Registry 16.17
31 TestAddons/parallel/InspektorGadget 10.99
33 TestAddons/parallel/HelmTiller 15.67
35 TestAddons/parallel/CSI 64.92
36 TestAddons/parallel/Headlamp 13.2
37 TestAddons/parallel/CloudSpanner 6.61
38 TestAddons/parallel/LocalPath 51.54
39 TestAddons/parallel/NvidiaDevicePlugin 5.67
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.12
45 TestCertOptions 66.18
46 TestCertExpiration 334.37
48 TestForceSystemdFlag 103.54
49 TestForceSystemdEnv 46.22
51 TestKVMDriverInstallOrUpdate 1.27
55 TestErrorSpam/setup 45.26
56 TestErrorSpam/start 0.38
57 TestErrorSpam/status 0.78
58 TestErrorSpam/pause 1.62
59 TestErrorSpam/unpause 1.69
60 TestErrorSpam/stop 6.32
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 97.06
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 52.67
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.06
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
72 TestFunctional/serial/CacheCmd/cache/add_local 1.1
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 35.16
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.59
83 TestFunctional/serial/LogsFileCmd 1.64
84 TestFunctional/serial/InvalidService 4.35
86 TestFunctional/parallel/ConfigCmd 0.46
87 TestFunctional/parallel/DashboardCmd 44.02
88 TestFunctional/parallel/DryRun 0.32
89 TestFunctional/parallel/InternationalLanguage 0.16
90 TestFunctional/parallel/StatusCmd 1.05
94 TestFunctional/parallel/ServiceCmdConnect 12.63
95 TestFunctional/parallel/AddonsCmd 0.16
96 TestFunctional/parallel/PersistentVolumeClaim 43.49
98 TestFunctional/parallel/SSHCmd 0.45
99 TestFunctional/parallel/CpCmd 1.48
100 TestFunctional/parallel/MySQL 31.71
101 TestFunctional/parallel/FileSync 0.27
102 TestFunctional/parallel/CertSync 1.41
106 TestFunctional/parallel/NodeLabels 0.1
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
110 TestFunctional/parallel/License 0.19
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.21
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
122 TestFunctional/parallel/ProfileCmd/profile_list 0.35
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
124 TestFunctional/parallel/MountCmd/any-port 7.62
125 TestFunctional/parallel/MountCmd/specific-port 1.99
126 TestFunctional/parallel/MountCmd/VerifyCleanup 0.8
127 TestFunctional/parallel/ServiceCmd/List 0.4
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
130 TestFunctional/parallel/ServiceCmd/Format 0.33
131 TestFunctional/parallel/ServiceCmd/URL 0.34
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
135 TestFunctional/parallel/Version/short 0.06
136 TestFunctional/parallel/Version/components 0.79
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.12
142 TestFunctional/parallel/ImageCommands/Setup 1.22
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.06
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.14
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.95
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.3
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 204.3
157 TestMultiControlPlane/serial/DeployApp 4.7
158 TestMultiControlPlane/serial/PingHostFromPods 1.35
159 TestMultiControlPlane/serial/AddWorkerNode 49.11
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
162 TestMultiControlPlane/serial/CopyFile 13.49
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.51
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
168 TestMultiControlPlane/serial/DeleteSecondaryNode 17.54
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
171 TestMultiControlPlane/serial/RestartCluster 383.33
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.42
173 TestMultiControlPlane/serial/AddSecondaryNode 73.68
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.58
178 TestJSONOutput/start/Command 59.99
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.77
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.68
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.43
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.06
207 TestMinikubeProfile 93.1
210 TestMountStart/serial/StartWithMountFirst 25.94
211 TestMountStart/serial/VerifyMountFirst 0.39
212 TestMountStart/serial/StartWithMountSecond 27.59
213 TestMountStart/serial/VerifyMountSecond 0.39
214 TestMountStart/serial/DeleteFirst 0.7
215 TestMountStart/serial/VerifyMountPostDelete 0.39
216 TestMountStart/serial/Stop 1.4
217 TestMountStart/serial/RestartStopped 21.91
218 TestMountStart/serial/VerifyMountPostStop 0.39
221 TestMultiNode/serial/FreshStart2Nodes 109.29
222 TestMultiNode/serial/DeployApp2Nodes 4.24
223 TestMultiNode/serial/PingHostFrom2Pods 0.86
224 TestMultiNode/serial/AddNode 42.11
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.22
227 TestMultiNode/serial/CopyFile 7.54
228 TestMultiNode/serial/StopNode 3.16
229 TestMultiNode/serial/StartAfterStop 27.52
231 TestMultiNode/serial/DeleteNode 2.41
233 TestMultiNode/serial/RestartMultiNode 172.47
234 TestMultiNode/serial/ValidateNameConflict 47.24
241 TestScheduledStopUnix 116.16
245 TestRunningBinaryUpgrade 176.65
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
254 TestNoKubernetes/serial/StartWithK8s 96.56
259 TestNetworkPlugins/group/false 3.36
263 TestNoKubernetes/serial/StartWithStopK8s 43.07
264 TestNoKubernetes/serial/Start 50.24
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
266 TestNoKubernetes/serial/ProfileList 0.84
267 TestNoKubernetes/serial/Stop 1.48
268 TestNoKubernetes/serial/StartNoArgs 68.23
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
270 TestStoppedBinaryUpgrade/Setup 0.5
271 TestStoppedBinaryUpgrade/Upgrade 102.45
280 TestPause/serial/Start 112.09
281 TestNetworkPlugins/group/auto/Start 124.06
282 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
283 TestNetworkPlugins/group/bridge/Start 125.92
285 TestNetworkPlugins/group/auto/KubeletFlags 0.22
286 TestNetworkPlugins/group/auto/NetCatPod 12.22
287 TestNetworkPlugins/group/auto/DNS 0.17
288 TestNetworkPlugins/group/auto/Localhost 0.13
289 TestNetworkPlugins/group/auto/HairPin 0.13
290 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
291 TestNetworkPlugins/group/bridge/NetCatPod 12.35
292 TestNetworkPlugins/group/enable-default-cni/Start 115.63
293 TestNetworkPlugins/group/kindnet/Start 110.05
294 TestNetworkPlugins/group/bridge/DNS 0.15
295 TestNetworkPlugins/group/bridge/Localhost 0.13
296 TestNetworkPlugins/group/bridge/HairPin 0.13
297 TestNetworkPlugins/group/flannel/Start 132.55
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
300 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
301 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
302 TestNetworkPlugins/group/kindnet/NetCatPod 12.41
303 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
304 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
305 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
306 TestNetworkPlugins/group/calico/Start 99.76
307 TestNetworkPlugins/group/kindnet/DNS 0.2
308 TestNetworkPlugins/group/kindnet/Localhost 0.16
309 TestNetworkPlugins/group/kindnet/HairPin 0.16
310 TestNetworkPlugins/group/custom-flannel/Start 106.85
313 TestNetworkPlugins/group/flannel/ControllerPod 6.01
314 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
315 TestNetworkPlugins/group/flannel/NetCatPod 12.36
316 TestNetworkPlugins/group/flannel/DNS 0.19
317 TestNetworkPlugins/group/flannel/Localhost 0.17
318 TestNetworkPlugins/group/flannel/HairPin 0.18
320 TestStartStop/group/no-preload/serial/FirstStart 153.63
321 TestNetworkPlugins/group/calico/ControllerPod 5.04
322 TestNetworkPlugins/group/calico/KubeletFlags 0.29
323 TestNetworkPlugins/group/calico/NetCatPod 12.7
324 TestNetworkPlugins/group/calico/DNS 0.16
325 TestNetworkPlugins/group/calico/Localhost 0.14
326 TestNetworkPlugins/group/calico/HairPin 0.14
327 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
328 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.24
330 TestStartStop/group/embed-certs/serial/FirstStart 106.41
331 TestNetworkPlugins/group/custom-flannel/DNS 0.22
332 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
333 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.33
336 TestStartStop/group/no-preload/serial/DeployApp 7.27
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
339 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
342 TestStartStop/group/embed-certs/serial/DeployApp 8.28
343 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
348 TestStartStop/group/no-preload/serial/SecondStart 742.48
351 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 608.95
352 TestStartStop/group/embed-certs/serial/SecondStart 623.25
353 TestStartStop/group/old-k8s-version/serial/Stop 1.53
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
365 TestStartStop/group/newest-cni/serial/FirstStart 60.62
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.21
368 TestStartStop/group/newest-cni/serial/Stop 11.38
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
370 TestStartStop/group/newest-cni/serial/SecondStart 38.65
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
374 TestStartStop/group/newest-cni/serial/Pause 3.27
TestDownloadOnly/v1.20.0/json-events (9.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-347670 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-347670 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.138453425s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-347670
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-347670: exit status 85 (71.142898ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-347670 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |          |
	|         | -p download-only-347670        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 23:57:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 23:57:06.588769   83754 out.go:291] Setting OutFile to fd 1 ...
	I0419 23:57:06.588917   83754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 23:57:06.588929   83754 out.go:304] Setting ErrFile to fd 2...
	I0419 23:57:06.588936   83754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 23:57:06.589145   83754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	W0419 23:57:06.589280   83754 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18703-76456/.minikube/config/config.json: open /home/jenkins/minikube-integration/18703-76456/.minikube/config/config.json: no such file or directory
	I0419 23:57:06.589877   83754 out.go:298] Setting JSON to true
	I0419 23:57:06.590811   83754 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9574,"bootTime":1713561453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 23:57:06.590874   83754 start.go:139] virtualization: kvm guest
	I0419 23:57:06.593507   83754 out.go:97] [download-only-347670] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 23:57:06.595034   83754 out.go:169] MINIKUBE_LOCATION=18703
	W0419 23:57:06.593671   83754 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball: no such file or directory
	I0419 23:57:06.593749   83754 notify.go:220] Checking for updates...
	I0419 23:57:06.597774   83754 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 23:57:06.599223   83754 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0419 23:57:06.600473   83754 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0419 23:57:06.601664   83754 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0419 23:57:06.603963   83754 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0419 23:57:06.604236   83754 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 23:57:06.638608   83754 out.go:97] Using the kvm2 driver based on user configuration
	I0419 23:57:06.638661   83754 start.go:297] selected driver: kvm2
	I0419 23:57:06.638674   83754 start.go:901] validating driver "kvm2" against <nil>
	I0419 23:57:06.639056   83754 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 23:57:06.639159   83754 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 23:57:06.654499   83754 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0419 23:57:06.654579   83754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 23:57:06.655051   83754 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0419 23:57:06.655245   83754 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 23:57:06.655316   83754 cni.go:84] Creating CNI manager for ""
	I0419 23:57:06.655331   83754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 23:57:06.655339   83754 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 23:57:06.655411   83754 start.go:340] cluster config:
	{Name:download-only-347670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-347670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 23:57:06.655606   83754 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 23:57:06.657564   83754 out.go:97] Downloading VM boot image ...
	I0419 23:57:06.657608   83754 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18703-76456/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0419 23:57:09.815209   83754 out.go:97] Starting "download-only-347670" primary control-plane node in "download-only-347670" cluster
	I0419 23:57:09.815254   83754 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0419 23:57:09.841217   83754 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0419 23:57:09.841246   83754 cache.go:56] Caching tarball of preloaded images
	I0419 23:57:09.841387   83754 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0419 23:57:09.843043   83754 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0419 23:57:09.843066   83754 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 23:57:09.866700   83754 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-347670 host does not exist
	  To start a cluster, run: "minikube start -p download-only-347670"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-347670
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (5.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-740714 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-740714 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.546936267s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (5.55s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-740714
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-740714: exit status 85 (72.704653ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-347670 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |                     |
	|         | -p download-only-347670        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| delete  | -p download-only-347670        | download-only-347670 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC | 19 Apr 24 23:57 UTC |
	| start   | -o=json --download-only        | download-only-740714 | jenkins | v1.33.0 | 19 Apr 24 23:57 UTC |                     |
	|         | -p download-only-740714        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 23:57:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 23:57:16.057546   83934 out.go:291] Setting OutFile to fd 1 ...
	I0419 23:57:16.057763   83934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 23:57:16.057772   83934 out.go:304] Setting ErrFile to fd 2...
	I0419 23:57:16.057776   83934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 23:57:16.057934   83934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0419 23:57:16.058493   83934 out.go:298] Setting JSON to true
	I0419 23:57:16.059303   83934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9583,"bootTime":1713561453,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 23:57:16.059356   83934 start.go:139] virtualization: kvm guest
	I0419 23:57:16.061427   83934 out.go:97] [download-only-740714] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 23:57:16.062803   83934 out.go:169] MINIKUBE_LOCATION=18703
	I0419 23:57:16.061636   83934 notify.go:220] Checking for updates...
	I0419 23:57:16.065611   83934 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 23:57:16.067033   83934 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0419 23:57:16.068330   83934 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0419 23:57:16.069600   83934 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0419 23:57:16.072041   83934 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0419 23:57:16.072299   83934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 23:57:16.103227   83934 out.go:97] Using the kvm2 driver based on user configuration
	I0419 23:57:16.103259   83934 start.go:297] selected driver: kvm2
	I0419 23:57:16.103268   83934 start.go:901] validating driver "kvm2" against <nil>
	I0419 23:57:16.103609   83934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 23:57:16.103693   83934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18703-76456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 23:57:16.118342   83934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0419 23:57:16.118391   83934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 23:57:16.118816   83934 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0419 23:57:16.118951   83934 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 23:57:16.119011   83934 cni.go:84] Creating CNI manager for ""
	I0419 23:57:16.119022   83934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 23:57:16.119033   83934 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 23:57:16.119084   83934 start.go:340] cluster config:
	{Name:download-only-740714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-740714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 23:57:16.119183   83934 iso.go:125] acquiring lock: {Name:mk84b6faf36a4fd912f40504fcac14cc85cea6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 23:57:16.120792   83934 out.go:97] Starting "download-only-740714" primary control-plane node in "download-only-740714" cluster
	I0419 23:57:16.120809   83934 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 23:57:16.144339   83934 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 23:57:16.144353   83934 cache.go:56] Caching tarball of preloaded images
	I0419 23:57:16.144462   83934 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 23:57:16.145916   83934 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0419 23:57:16.145930   83934 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 23:57:16.172097   83934 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 23:57:20.078515   83934 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 23:57:20.078606   83934 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18703-76456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 23:57:20.805765   83934 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 23:57:20.806167   83934 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/download-only-740714/config.json ...
	I0419 23:57:20.806211   83934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/download-only-740714/config.json: {Name:mk56ff1e3771f07128a283dde0053ae49f54a3b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 23:57:20.806413   83934 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 23:57:20.806573   83934 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18703-76456/.minikube/cache/linux/amd64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-740714 host does not exist
	  To start a cluster, run: "minikube start -p download-only-740714"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-740714
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-466470 --alsologtostderr --binary-mirror http://127.0.0.1:39973 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-466470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-466470
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (116.91s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-255012 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-255012 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m56.079799097s)
helpers_test.go:175: Cleaning up "offline-crio-255012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-255012
--- PASS: TestOffline (116.91s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-903502
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-903502: exit status 85 (65.820818ms)

                                                
                                                
-- stdout --
	* Profile "addons-903502" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903502"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-903502
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-903502: exit status 85 (61.513419ms)

                                                
                                                
-- stdout --
	* Profile "addons-903502" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903502"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (185.02s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-903502 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-903502 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m5.021624437s)
--- PASS: TestAddons/Setup (185.02s)

                                                
                                    
TestAddons/parallel/Registry (16.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 23.666422ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qdwvn" [35c4ac3f-fc00-413c-b0e4-a411f7888bf5] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005341434s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jstzq" [f7e2cb22-44fa-4141-9d32-90e8315b38f4] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009860976s
addons_test.go:340: (dbg) Run:  kubectl --context addons-903502 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-903502 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-903502 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.069334928s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 ip
2024/04/20 00:00:43 [DEBUG] GET http://192.168.39.36:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.17s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.99s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ndkxf" [23b60c81-8c36-4525-b5fd-6679455e32e8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005387266s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-903502
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-903502: (5.981501843s)
--- PASS: TestAddons/parallel/InspektorGadget (10.99s)

                                                
                                    
TestAddons/parallel/HelmTiller (15.67s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.265289ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-cjckf" [9d3c558e-6fdb-4a44-b71f-4353e1043b27] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005580345s
addons_test.go:473: (dbg) Run:  kubectl --context addons-903502 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-903502 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.057489335s)
addons_test.go:478: kubectl --context addons-903502 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:473: (dbg) Run:  kubectl --context addons-903502 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-903502 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.261288297s)
addons_test.go:478: kubectl --context addons-903502 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: Internal error occurred: unable to upgrade connection: container helm-test not found in pod helm-test_kube-system
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.67s)

                                                
                                    
TestAddons/parallel/CSI (64.92s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 26.332815ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-903502 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-903502 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d8bca96f-bc8d-44c3-bf0e-0936b253d947] Pending
helpers_test.go:344: "task-pv-pod" [d8bca96f-bc8d-44c3-bf0e-0936b253d947] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d8bca96f-bc8d-44c3-bf0e-0936b253d947] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005370088s
addons_test.go:584: (dbg) Run:  kubectl --context addons-903502 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-903502 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-903502 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-903502 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-903502 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-903502 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-903502 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c54b0683-814d-4b8a-8af4-3d470408bafd] Pending
helpers_test.go:344: "task-pv-pod-restore" [c54b0683-814d-4b8a-8af4-3d470408bafd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c54b0683-814d-4b8a-8af4-3d470408bafd] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004197536s
addons_test.go:626: (dbg) Run:  kubectl --context addons-903502 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-903502 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-903502 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-903502 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.998452394s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.92s)
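Note: the manifests under testdata/csi-hostpath-driver referenced above are not reproduced in this report. As a minimal sketch of the restore step the test exercises, a PVC that restores from the earlier VolumeSnapshot typically looks like the following; the storage class name and requested size here are assumptions for illustration, not values read from the test data:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc    # assumed: the class the csi-hostpath-driver addon registers
  dataSource:
    name: new-snapshot-demo            # the VolumeSnapshot created earlier in this test
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

The test then mounts hpvc-restore in task-pv-pod-restore and waits for that pod to report Running, which is the "healthy within 7.004197536s" check logged above.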

                                                
                                    
TestAddons/parallel/Headlamp (13.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-903502 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-903502 --alsologtostderr -v=1: (1.190364057s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-g8dbz" [6faf5229-91df-43d8-9dc0-e15e7d5d5f1d] Pending
helpers_test.go:344: "headlamp-7559bf459f-g8dbz" [6faf5229-91df-43d8-9dc0-e15e7d5d5f1d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-g8dbz" [6faf5229-91df-43d8-9dc0-e15e7d5d5f1d] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-g8dbz" [6faf5229-91df-43d8-9dc0-e15e7d5d5f1d] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004714578s
--- PASS: TestAddons/parallel/Headlamp (13.20s)

TestAddons/parallel/CloudSpanner (6.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-9qzcv" [e64da815-cc50-435f-999c-6456faf9dfba] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004902508s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-903502
--- PASS: TestAddons/parallel/CloudSpanner (6.61s)

TestAddons/parallel/LocalPath (51.54s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-903502 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-903502 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903502 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7fdfff8e-01a9-4d04-bf0d-37531f492ed3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7fdfff8e-01a9-4d04-bf0d-37531f492ed3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7fdfff8e-01a9-4d04-bf0d-37531f492ed3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005319554s
addons_test.go:891: (dbg) Run:  kubectl --context addons-903502 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 ssh "cat /opt/local-path-provisioner/pvc-1c22513e-d65d-44a6-87f2-b75cdb5b79eb_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-903502 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-903502 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-903502 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-903502 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.589124197s)
--- PASS: TestAddons/parallel/LocalPath (51.54s)
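
The LocalPath steps above amount to the following provisioning round trip; the directory under /opt/local-path-provisioner is generated per claim (<pv-name>_<namespace>_<pvc-name>), so the exact path differs between runs:

	kubectl --context addons-903502 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-903502 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# the pod writes file1 into the dynamically provisioned hostPath volume; read it back from the node
	out/minikube-linux-amd64 -p addons-903502 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"
	kubectl --context addons-903502 delete pod test-local-path
	kubectl --context addons-903502 delete pvc test-pvc
	out/minikube-linux-amd64 -p addons-903502 addons disable storage-provisioner-rancher --alsologtostderr -v=1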

TestAddons/parallel/NvidiaDevicePlugin (5.67s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gxtqp" [e35a27ed-f4cb-4e7f-a1c3-b0ddcc6c2546] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005558593s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-903502
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-s6wnr" [63506f40-47b2-404e-bcd0-27cca6d4d119] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004768347s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-903502 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-903502 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestCertOptions (66.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-744712 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-744712 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m4.712294289s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-744712 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-744712 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-744712 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-744712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-744712
--- PASS: TestCertOptions (66.18s)
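
In plain terms, this test checks that every --apiserver-ips/--apiserver-names value lands in the API server certificate's Subject Alternative Names and that the custom --apiserver-port shows up in the generated kubeconfig. A sketch of the same check, using the commands from the log:

	out/minikube-linux-amd64 start -p cert-options-744712 --memory=2048 \
	    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	    --apiserver-names=localhost --apiserver-names=www.google.com \
	    --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
	# 192.168.15.15 and www.google.com should appear under X509v3 Subject Alternative Name
	out/minikube-linux-amd64 -p cert-options-744712 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
	# the kubeconfig server URL should use port 8555
	kubectl --context cert-options-744712 config view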

TestCertExpiration (334.37s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-692221 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-692221 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m51.546471125s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-692221 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-692221 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (41.738750229s)
helpers_test.go:175: Cleaning up "cert-expiration-692221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-692221
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-692221: (1.085963304s)
--- PASS: TestCertExpiration (334.37s)

TestForceSystemdFlag (103.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-260413 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-260413 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m42.292289071s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-260413 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-260413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-260413
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-260413: (1.016457421s)
--- PASS: TestForceSystemdFlag (103.54s)
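
--force-systemd switches the node's container runtime to the systemd cgroup manager; with CRI-O that is visible in the drop-in config the test cats above. A sketch of the check (the grep is illustrative only and not part of the test):

	out/minikube-linux-amd64 start -p force-systemd-flag-260413 --memory=2048 --force-systemd \
	    --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
	# expect a line like: cgroup_manager = "systemd"
	out/minikube-linux-amd64 -p force-systemd-flag-260413 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager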

TestForceSystemdEnv (46.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-339159 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-339159 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.428455675s)
helpers_test.go:175: Cleaning up "force-systemd-env-339159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-339159
--- PASS: TestForceSystemdEnv (46.22s)

TestKVMDriverInstallOrUpdate (1.27s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.27s)

TestErrorSpam/setup (45.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-017645 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-017645 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-017645 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-017645 --driver=kvm2  --container-runtime=crio: (45.26290543s)
--- PASS: TestErrorSpam/setup (45.26s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 pause
--- PASS: TestErrorSpam/pause (1.62s)

TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

TestErrorSpam/stop (6.32s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 stop: (2.303276435s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 stop: (1.982325478s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-017645 --log_dir /tmp/nospam-017645 stop: (2.038810819s)
--- PASS: TestErrorSpam/stop (6.32s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18703-76456/.minikube/files/etc/test/nested/copy/83742/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (97.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238176 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0420 00:10:27.815511   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:27.821362   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:27.831663   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:27.851993   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:27.892345   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:27.972715   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:28.133141   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:28.453732   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:29.094578   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:30.375085   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:32.935737   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:38.056318   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:10:48.296528   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:11:08.777515   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-238176 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m37.063294554s)
--- PASS: TestFunctional/serial/StartWithProxy (97.06s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (52.67s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238176 --alsologtostderr -v=8
E0420 00:11:49.739405   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-238176 --alsologtostderr -v=8: (52.671402142s)
functional_test.go:659: soft start took 52.672027934s for "functional-238176" cluster.
--- PASS: TestFunctional/serial/SoftStart (52.67s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-238176 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-238176 cache add registry.k8s.io/pause:3.3: (1.183239771s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-238176 cache add registry.k8s.io/pause:latest: (1.050108096s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-238176 /tmp/TestFunctionalserialCacheCmdcacheadd_local3768884996/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 cache add minikube-local-cache-test:functional-238176
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 cache delete minikube-local-cache-test:functional-238176
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-238176
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238176 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (229.092888ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
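
Taken together, the cache subtests above demonstrate the cache round trip: "cache add" pulls an image onto the node, removing it with crictl makes inspecti fail, and "cache reload" pushes the locally cached copy back. The same sequence, condensed from the commands in this log:

	out/minikube-linux-amd64 -p functional-238176 cache add registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-238176 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-238176 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image removed from the node
	out/minikube-linux-amd64 -p functional-238176 cache reload
	out/minikube-linux-amd64 -p functional-238176 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again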

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 kubectl -- --context functional-238176 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-238176 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (35.16s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238176 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-238176 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.157752246s)
functional_test.go:757: restart took 35.157861359s for "functional-238176" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.16s)
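
ExtraConfig restarts the existing profile while passing a component flag through to kubeadm; the option is persisted in the profile, which is why it later appears as ExtraOptions:[{Component:apiserver Key:enable-admission-plugins ...}] in the DryRun config dumps further down. A minimal sketch of the restart:

	# restart the same profile with an extra kube-apiserver flag and wait for all components
	out/minikube-linux-amd64 start -p functional-238176 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all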

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-238176 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.59s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-238176 logs: (1.584799675s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

TestFunctional/serial/LogsFileCmd (1.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 logs --file /tmp/TestFunctionalserialLogsFileCmd3176607948/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-238176 logs --file /tmp/TestFunctionalserialLogsFileCmd3176607948/001/logs.txt: (1.637566592s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.64s)

TestFunctional/serial/InvalidService (4.35s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-238176 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-238176
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-238176: exit status 115 (285.573696ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.100:30233 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-238176 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238176 config get cpus: exit status 14 (70.316334ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238176 config get cpus: exit status 14 (75.324776ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (44.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-238176 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-238176 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 93193: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (44.02s)

TestFunctional/parallel/DryRun (0.32s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-238176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (155.964533ms)

-- stdout --
	* [functional-238176] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0420 00:13:24.637605   92720 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:13:24.637906   92720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:13:24.637920   92720 out.go:304] Setting ErrFile to fd 2...
	I0420 00:13:24.637928   92720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:13:24.638600   92720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:13:24.639533   92720 out.go:298] Setting JSON to false
	I0420 00:13:24.641243   92720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10552,"bootTime":1713561453,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 00:13:24.641365   92720 start.go:139] virtualization: kvm guest
	I0420 00:13:24.643236   92720 out.go:177] * [functional-238176] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 00:13:24.644965   92720 notify.go:220] Checking for updates...
	I0420 00:13:24.644972   92720 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:13:24.646728   92720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:13:24.648063   92720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:13:24.649337   92720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:13:24.650552   92720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 00:13:24.651670   92720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:13:24.653105   92720 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:13:24.653561   92720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:13:24.653622   92720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:13:24.668683   92720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0420 00:13:24.669173   92720 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:13:24.669715   92720 main.go:141] libmachine: Using API Version  1
	I0420 00:13:24.669739   92720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:13:24.670049   92720 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:13:24.670184   92720 main.go:141] libmachine: (functional-238176) Calling .DriverName
	I0420 00:13:24.670433   92720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:13:24.670723   92720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:13:24.670757   92720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:13:24.685202   92720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
	I0420 00:13:24.685614   92720 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:13:24.686082   92720 main.go:141] libmachine: Using API Version  1
	I0420 00:13:24.686115   92720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:13:24.686458   92720 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:13:24.686725   92720 main.go:141] libmachine: (functional-238176) Calling .DriverName
	I0420 00:13:24.719155   92720 out.go:177] * Using the kvm2 driver based on existing profile
	I0420 00:13:24.720547   92720 start.go:297] selected driver: kvm2
	I0420 00:13:24.720562   92720 start.go:901] validating driver "kvm2" against &{Name:functional-238176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-238176 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:13:24.720733   92720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:13:24.722889   92720 out.go:177] 
	W0420 00:13:24.724108   92720 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0420 00:13:24.725344   92720 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238176 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
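
--dry-run runs the full argument and resource validation against the existing profile without touching the VM, so the undersized memory request above fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the plain dry run succeeds. A sketch of the two invocations, with the exit-code comments added for illustration:

	out/minikube-linux-amd64 start -p functional-238176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
	echo $?   # 23 - requested 250MiB is below the usable minimum of 1800MB
	out/minikube-linux-amd64 start -p functional-238176 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	echo $?   # 0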

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-238176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (163.181251ms)

-- stdout --
	* [functional-238176] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0420 00:13:24.471690   92659 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:13:24.471863   92659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:13:24.471896   92659 out.go:304] Setting ErrFile to fd 2...
	I0420 00:13:24.471914   92659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:13:24.472195   92659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:13:24.472713   92659 out.go:298] Setting JSON to false
	I0420 00:13:24.473813   92659 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10551,"bootTime":1713561453,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 00:13:24.473898   92659 start.go:139] virtualization: kvm guest
	I0420 00:13:24.476136   92659 out.go:177] * [functional-238176] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0420 00:13:24.477707   92659 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:13:24.477729   92659 notify.go:220] Checking for updates...
	I0420 00:13:24.479221   92659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:13:24.480614   92659 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 00:13:24.481914   92659 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 00:13:24.483194   92659 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 00:13:24.484499   92659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:13:24.486146   92659 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:13:24.486741   92659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:13:24.486827   92659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:13:24.506018   92659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0420 00:13:24.506523   92659 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:13:24.507199   92659 main.go:141] libmachine: Using API Version  1
	I0420 00:13:24.507222   92659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:13:24.507597   92659 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:13:24.507801   92659 main.go:141] libmachine: (functional-238176) Calling .DriverName
	I0420 00:13:24.508015   92659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:13:24.508331   92659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:13:24.508365   92659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:13:24.523408   92659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
	I0420 00:13:24.523917   92659 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:13:24.524478   92659 main.go:141] libmachine: Using API Version  1
	I0420 00:13:24.524504   92659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:13:24.524837   92659 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:13:24.525030   92659 main.go:141] libmachine: (functional-238176) Calling .DriverName
	I0420 00:13:24.562803   92659 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0420 00:13:24.564006   92659 start.go:297] selected driver: kvm2
	I0420 00:13:24.564020   92659 start.go:901] validating driver "kvm2" against &{Name:functional-238176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-238176 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:13:24.564130   92659 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:13:24.566053   92659 out.go:177] 
	W0420 00:13:24.567388   92659 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0420 00:13:24.568743   92659 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/ServiceCmdConnect (12.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-238176 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-238176 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-jhpkv" [58648814-73e1-4c77-8874-9de5be69c068] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-jhpkv" [58648814-73e1-4c77-8874-9de5be69c068] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.00543104s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.100:32048
functional_test.go:1671: http://192.168.39.100:32048: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-jhpkv

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.100:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.100:32048
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.63s)
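The NodePort round trip above can be replayed outside the harness; a rough sketch, assuming the same profile and that registry.k8s.io/echoserver:1.8 can be pulled by the node:

	kubectl --context functional-238176 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-238176 expose deployment hello-node-connect --type=NodePort --port=8080
	# wait for the deployment instead of polling pod phases by hand
	kubectl --context functional-238176 wait --for=condition=available deployment/hello-node-connect --timeout=120s
	URL=$(out/minikube-linux-amd64 -p functional-238176 service hello-node-connect --url)
	curl -s "$URL"   # echoserver replies with the pod hostname and the request headers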

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (43.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
E0420 00:13:11.660352   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
helpers_test.go:344: "storage-provisioner" [c211c319-b51a-4758-9a50-4df9556bd5b7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005139701s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-238176 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-238176 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-238176 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-238176 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23c1ca99-92d8-4220-9908-7166aba7e727] Pending
helpers_test.go:344: "sp-pod" [23c1ca99-92d8-4220-9908-7166aba7e727] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [23c1ca99-92d8-4220-9908-7166aba7e727] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004949182s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-238176 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-238176 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-238176 delete -f testdata/storage-provisioner/pod.yaml: (3.22171944s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-238176 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [13c9def7-1065-4cb6-a057-79d87e57a023] Pending
helpers_test.go:344: "sp-pod" [13c9def7-1065-4cb6-a057-79d87e57a023] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [13c9def7-1065-4cb6-a057-79d87e57a023] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.012974508s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-238176 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.49s)
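The claim/write/recreate/read sequence above can be repeated manually; a sketch, assuming the testdata/storage-provisioner manifests from the minikube repository are available locally:

	kubectl --context functional-238176 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-238176 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-238176 wait --for=condition=ready pod/sp-pod --timeout=180s
	kubectl --context functional-238176 exec sp-pod -- touch /tmp/mount/foo
	# delete and recreate the pod; the PVC keeps the data
	kubectl --context functional-238176 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-238176 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-238176 wait --for=condition=ready pod/sp-pod --timeout=180s
	kubectl --context functional-238176 exec sp-pod -- ls /tmp/mount   # foo is still present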

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh -n functional-238176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 cp functional-238176:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4286580394/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh -n functional-238176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh -n functional-238176 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (31.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-238176 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-r9jms" [3da8a958-3214-4673-b392-059d065a0508] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-r9jms" [3da8a958-3214-4673-b392-059d065a0508] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.011465309s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-238176 exec mysql-64454c8b5c-r9jms -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-238176 exec mysql-64454c8b5c-r9jms -- mysql -ppassword -e "show databases;": exit status 1 (139.920913ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-238176 exec mysql-64454c8b5c-r9jms -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-238176 exec mysql-64454c8b5c-r9jms -- mysql -ppassword -e "show databases;": exit status 1 (148.816084ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-238176 exec mysql-64454c8b5c-r9jms -- mysql -ppassword -e "show databases;"
2024/04/20 00:14:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (31.71s)
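The two non-zero exits above are expected: the pod is Running, but mysqld has not finished initializing, so the client cannot reach /var/run/mysqld/mysqld.sock yet and the test simply retries until the query succeeds. An equivalent retry loop, as a sketch using the pod name from this run:

	# keep retrying until mysqld accepts connections
	until kubectl --context functional-238176 exec mysql-64454c8b5c-r9jms -- mysql -ppassword -e "show databases;"; do
		sleep 2
	done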

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/83742/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /etc/test/nested/copy/83742/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/83742.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /etc/ssl/certs/83742.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/83742.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /usr/share/ca-certificates/83742.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/837422.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /etc/ssl/certs/837422.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/837422.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /usr/share/ca-certificates/837422.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
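The checks above verify that a host certificate has been synced into the guest at both /etc/ssl/certs and /usr/share/ca-certificates, along with an OpenSSL hash-named copy. A sketch of the same verification; the numeric file names (83742.pem, 51391683.0) are specific to this run:

	out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /etc/ssl/certs/83742.pem"
	out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /usr/share/ca-certificates/83742.pem"
	# hash-named copy of the same certificate
	out/minikube-linux-amd64 -p functional-238176 ssh "sudo cat /etc/ssl/certs/51391683.0"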

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-238176 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238176 ssh "sudo systemctl is-active docker": exit status 1 (256.278131ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238176 ssh "sudo systemctl is-active containerd": exit status 1 (254.38424ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
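The non-zero exits here are the expected outcome: systemctl is-active exits with status 3 when a unit is inactive, and on this crio-backed profile the test only asserts that docker and containerd are not running. A sketch of the same check (assuming the CRI-O unit in the guest is named crio):

	out/minikube-linux-amd64 -p functional-238176 ssh "sudo systemctl is-active crio"        # expected: active, exit 0
	out/minikube-linux-amd64 -p functional-238176 ssh "sudo systemctl is-active docker"      # expected: inactive, exit 3
	out/minikube-linux-amd64 -p functional-238176 ssh "sudo systemctl is-active containerd"  # expected: inactive, exit 3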

                                                
                                    
x
+
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-238176 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-238176 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-t262l" [be4138d3-ff69-4121-9a31-8be13a2faf63] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-t262l" [be4138d3-ff69-4121-9a31-8be13a2faf63] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004650344s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "283.461041ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "70.692308ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "312.563547ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "59.78236ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
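The profile listing variants used in these subtests differ mainly in how much cluster state they probe; a sketch of the four invocations, with timings taken from this run (the much faster -l/--light forms appear to skip the per-profile status checks):

	out/minikube-linux-amd64 profile list                   # ~283ms here
	out/minikube-linux-amd64 profile list -l                # ~71ms here
	out/minikube-linux-amd64 profile list -o json           # ~313ms here
	out/minikube-linux-amd64 profile list -o json --light   # ~60ms here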

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdany-port49921379/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713571993661895130" to /tmp/TestFunctionalparallelMountCmdany-port49921379/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713571993661895130" to /tmp/TestFunctionalparallelMountCmdany-port49921379/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713571993661895130" to /tmp/TestFunctionalparallelMountCmdany-port49921379/001/test-1713571993661895130
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (267.168946ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 20 00:13 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 20 00:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 20 00:13 test-1713571993661895130
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh cat /mount-9p/test-1713571993661895130
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-238176 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4c0b5cc8-944c-417b-9595-ff6329ce3a4a] Pending
helpers_test.go:344: "busybox-mount" [4c0b5cc8-944c-417b-9595-ff6329ce3a4a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4c0b5cc8-944c-417b-9595-ff6329ce3a4a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4c0b5cc8-944c-417b-9595-ff6329ce3a4a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004355387s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-238176 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdany-port49921379/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.62s)
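The 9p mount flow above boils down to: start the mount in the background, confirm it inside the VM, exercise it from a pod, then tear it down. A condensed sketch; /tmp/example-mount is a placeholder host directory:

	out/minikube-linux-amd64 mount -p functional-238176 /tmp/example-mount:/mount-9p --alsologtostderr -v=1 &
	MOUNT_PID=$!
	out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry right after mounting
	out/minikube-linux-amd64 -p functional-238176 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-238176 ssh "sudo umount -f /mount-9p"
	kill $MOUNT_PID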

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdspecific-port2996532713/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.885431ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdspecific-port2996532713/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238176 ssh "sudo umount -f /mount-9p": exit status 1 (303.049479ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-238176 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdspecific-port2996532713/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2094148443/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2094148443/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2094148443/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-238176 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2094148443/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2094148443/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2094148443/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.80s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 service list -o json
functional_test.go:1490: Took "316.285982ms" to run "out/minikube-linux-amd64 -p functional-238176 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.100:32519
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.100:32519
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
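The last few subtests resolve the same NodePort service endpoint in three ways; a sketch, assuming hello-node is still exposed:

	out/minikube-linux-amd64 -p functional-238176 service hello-node --url                              # plain http URL
	out/minikube-linux-amd64 -p functional-238176 service --namespace=default --https --url hello-node  # https form of the URL
	out/minikube-linux-amd64 -p functional-238176 service hello-node --url --format="{{.IP}}"           # just the node IP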

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238176 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-238176
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238176 image ls --format short --alsologtostderr:
I0420 00:13:50.998870   93788 out.go:291] Setting OutFile to fd 1 ...
I0420 00:13:50.999171   93788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:50.999184   93788 out.go:304] Setting ErrFile to fd 2...
I0420 00:13:50.999190   93788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:50.999443   93788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
I0420 00:13:51.000176   93788 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:51.000291   93788 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:51.000676   93788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:51.000719   93788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:51.015103   93788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43839
I0420 00:13:51.015515   93788 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:51.016048   93788 main.go:141] libmachine: Using API Version  1
I0420 00:13:51.016073   93788 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:51.016396   93788 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:51.016582   93788 main.go:141] libmachine: (functional-238176) Calling .GetState
I0420 00:13:51.018394   93788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:51.018432   93788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:51.032771   93788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
I0420 00:13:51.033090   93788 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:51.033546   93788 main.go:141] libmachine: Using API Version  1
I0420 00:13:51.033566   93788 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:51.033911   93788 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:51.034104   93788 main.go:141] libmachine: (functional-238176) Calling .DriverName
I0420 00:13:51.034312   93788 ssh_runner.go:195] Run: systemctl --version
I0420 00:13:51.034340   93788 main.go:141] libmachine: (functional-238176) Calling .GetSSHHostname
I0420 00:13:51.036841   93788 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:51.037245   93788 main.go:141] libmachine: (functional-238176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:cf:b4", ip: ""} in network mk-functional-238176: {Iface:virbr1 ExpiryTime:2024-04-20 01:10:07 +0000 UTC Type:0 Mac:52:54:00:87:cf:b4 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:functional-238176 Clientid:01:52:54:00:87:cf:b4}
I0420 00:13:51.037271   93788 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined IP address 192.168.39.100 and MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:51.037364   93788 main.go:141] libmachine: (functional-238176) Calling .GetSSHPort
I0420 00:13:51.037528   93788 main.go:141] libmachine: (functional-238176) Calling .GetSSHKeyPath
I0420 00:13:51.037672   93788 main.go:141] libmachine: (functional-238176) Calling .GetSSHUsername
I0420 00:13:51.037782   93788 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/functional-238176/id_rsa Username:docker}
I0420 00:13:51.120364   93788 ssh_runner.go:195] Run: sudo crictl images --output json
I0420 00:13:51.175087   93788 main.go:141] libmachine: Making call to close driver server
I0420 00:13:51.175103   93788 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:51.175360   93788 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:51.175379   93788 main.go:141] libmachine: Making call to close connection to plugin binary
I0420 00:13:51.175387   93788 main.go:141] libmachine: Making call to close driver server
I0420 00:13:51.175412   93788 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:51.175625   93788 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:51.175663   93788 main.go:141] libmachine: (functional-238176) DBG | Closing plugin on server side
I0420 00:13:51.175671   93788 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
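The same image inventory can be listed in other formats as well (the table and json variants are exercised just below); a minimal sketch:

	out/minikube-linux-amd64 -p functional-238176 image ls --format short   # one repo:tag per line, as above
	out/minikube-linux-amd64 -p functional-238176 image ls --format table   # adds image ID and size columns
	out/minikube-linux-amd64 -p functional-238176 image ls --format json    # includes repo digests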

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238176 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| docker.io/library/nginx                 | latest             | 2ac752d7aeb1d | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-238176  | 233ca57b24c6a | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-238176  | dc9603243a78e | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238176 image ls --format table --alsologtostderr:
I0420 00:13:54.877203   93964 out.go:291] Setting OutFile to fd 1 ...
I0420 00:13:54.877322   93964 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:54.877330   93964 out.go:304] Setting ErrFile to fd 2...
I0420 00:13:54.877336   93964 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:54.877557   93964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
I0420 00:13:54.878152   93964 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:54.878268   93964 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:54.878647   93964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:54.878712   93964 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:54.893546   93964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
I0420 00:13:54.893987   93964 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:54.894552   93964 main.go:141] libmachine: Using API Version  1
I0420 00:13:54.894572   93964 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:54.894967   93964 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:54.895212   93964 main.go:141] libmachine: (functional-238176) Calling .GetState
I0420 00:13:54.896909   93964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:54.896952   93964 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:54.911495   93964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
I0420 00:13:54.911872   93964 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:54.912295   93964 main.go:141] libmachine: Using API Version  1
I0420 00:13:54.912320   93964 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:54.912682   93964 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:54.912858   93964 main.go:141] libmachine: (functional-238176) Calling .DriverName
I0420 00:13:54.913047   93964 ssh_runner.go:195] Run: systemctl --version
I0420 00:13:54.913070   93964 main.go:141] libmachine: (functional-238176) Calling .GetSSHHostname
I0420 00:13:54.915590   93964 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:54.916063   93964 main.go:141] libmachine: (functional-238176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:cf:b4", ip: ""} in network mk-functional-238176: {Iface:virbr1 ExpiryTime:2024-04-20 01:10:07 +0000 UTC Type:0 Mac:52:54:00:87:cf:b4 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:functional-238176 Clientid:01:52:54:00:87:cf:b4}
I0420 00:13:54.916100   93964 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined IP address 192.168.39.100 and MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:54.916225   93964 main.go:141] libmachine: (functional-238176) Calling .GetSSHPort
I0420 00:13:54.916415   93964 main.go:141] libmachine: (functional-238176) Calling .GetSSHKeyPath
I0420 00:13:54.916565   93964 main.go:141] libmachine: (functional-238176) Calling .GetSSHUsername
I0420 00:13:54.916741   93964 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/functional-238176/id_rsa Username:docker}
I0420 00:13:55.012337   93964 ssh_runner.go:195] Run: sudo crictl images --output json
I0420 00:13:55.110907   93964 main.go:141] libmachine: Making call to close driver server
I0420 00:13:55.110926   93964 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:55.111233   93964 main.go:141] libmachine: (functional-238176) DBG | Closing plugin on server side
I0420 00:13:55.111276   93964 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:55.111304   93964 main.go:141] libmachine: Making call to close connection to plugin binary
I0420 00:13:55.111321   93964 main.go:141] libmachine: Making call to close driver server
I0420 00:13:55.111333   93964 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:55.111575   93964 main.go:141] libmachine: (functional-238176) DBG | Closing plugin on server side
I0420 00:13:55.111636   93964 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:55.111661   93964 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238176 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12
-0"],"size":"150779692"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"233ca57b24c6ad4b612d433092e4b95e0214b5d6fe6dfefb6a1f51afc3cb038e","repoDigests":["localhost/my-image@sha256:57f0ac115aa1ad93fecd51bd94cfcabed5477fc80210171848ed071d3ac5ea6c"],"repoTags":["localhost/my-image:functional-238176"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83f
d5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controll
er-manager:v1.30.0"],"size":"112170310"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"d7934abb16d7dc6cc2cc910ae848ddc822
85f26bec55f34bf28f7665da2c9ec4","repoDigests":["docker.io/library/93ffe20ceb47a7c845c20198fbbc1921349174a66e7c83b3c6529947cead3edc-tmp@sha256:bf3676fc10f5c3d9daecca3374bbcac0d1859bd1fa71b292a81e3150bf0e15bf"],"repoTags":[],"size":"1466018"},{"id":"2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580","repoDigests":["docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419","docker.io/library/nginx@sha256:b5873c5e785c0ae70b4f999d6719a27441126667088c2edd1eaf3060e4868ec5"],"repoTags":["docker.io/library/nginx:latest"],"size":"191703878"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"dc9603243a78e35b42fa07cf03df1c1
1679c929ff0136edfc0811051046a870b","repoDigests":["localhost/minikube-local-cache-test@sha256:616b543b1dc969143b517b2ff1970dcee76525034c6e5d42ccf9e29291500bb6"],"repoTags":["localhost/minikube-local-cache-test:functional-238176"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822
659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238176 image ls --format json --alsologtostderr:
I0420 00:13:54.578750   93933 out.go:291] Setting OutFile to fd 1 ...
I0420 00:13:54.579314   93933 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:54.579334   93933 out.go:304] Setting ErrFile to fd 2...
I0420 00:13:54.579342   93933 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:54.579783   93933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
I0420 00:13:54.581115   93933 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:54.581289   93933 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:54.581847   93933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:54.581917   93933 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:54.597557   93933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44397
I0420 00:13:54.598085   93933 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:54.598606   93933 main.go:141] libmachine: Using API Version  1
I0420 00:13:54.598627   93933 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:54.598960   93933 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:54.599221   93933 main.go:141] libmachine: (functional-238176) Calling .GetState
I0420 00:13:54.601027   93933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:54.601069   93933 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:54.616059   93933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35657
I0420 00:13:54.616517   93933 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:54.617016   93933 main.go:141] libmachine: Using API Version  1
I0420 00:13:54.617041   93933 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:54.617369   93933 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:54.617555   93933 main.go:141] libmachine: (functional-238176) Calling .DriverName
I0420 00:13:54.617760   93933 ssh_runner.go:195] Run: systemctl --version
I0420 00:13:54.617785   93933 main.go:141] libmachine: (functional-238176) Calling .GetSSHHostname
I0420 00:13:54.620587   93933 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:54.621004   93933 main.go:141] libmachine: (functional-238176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:cf:b4", ip: ""} in network mk-functional-238176: {Iface:virbr1 ExpiryTime:2024-04-20 01:10:07 +0000 UTC Type:0 Mac:52:54:00:87:cf:b4 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:functional-238176 Clientid:01:52:54:00:87:cf:b4}
I0420 00:13:54.621033   93933 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined IP address 192.168.39.100 and MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:54.621206   93933 main.go:141] libmachine: (functional-238176) Calling .GetSSHPort
I0420 00:13:54.621386   93933 main.go:141] libmachine: (functional-238176) Calling .GetSSHKeyPath
I0420 00:13:54.621540   93933 main.go:141] libmachine: (functional-238176) Calling .GetSSHUsername
I0420 00:13:54.621776   93933 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/functional-238176/id_rsa Username:docker}
I0420 00:13:54.713367   93933 ssh_runner.go:195] Run: sudo crictl images --output json
I0420 00:13:54.809276   93933 main.go:141] libmachine: Making call to close driver server
I0420 00:13:54.809291   93933 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:54.809721   93933 main.go:141] libmachine: (functional-238176) DBG | Closing plugin on server side
I0420 00:13:54.809771   93933 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:54.809782   93933 main.go:141] libmachine: Making call to close connection to plugin binary
I0420 00:13:54.809792   93933 main.go:141] libmachine: Making call to close driver server
I0420 00:13:54.809801   93933 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:54.810226   93933 main.go:141] libmachine: (functional-238176) DBG | Closing plugin on server side
I0420 00:13:54.810284   93933 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:54.810305   93933 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
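
Note (illustrative, not part of the recorded run): the stderr above shows that `image ls` opens an SSH session to the node and runs `sudo crictl images --output json` there. A minimal sketch of checking the same listing by hand against this profile:

# List images through minikube's own subcommand, as the test does.
out/minikube-linux-amd64 -p functional-238176 image ls --format json

# Or query the container runtime directly over SSH, which is what the
# command above does under the hood in this log.
out/minikube-linux-amd64 -p functional-238176 ssh -- sudo crictl images --output json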

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238176 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580
repoDigests:
- docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419
- docker.io/library/nginx@sha256:b5873c5e785c0ae70b4f999d6719a27441126667088c2edd1eaf3060e4868ec5
repoTags:
- docker.io/library/nginx:latest
size: "191703878"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: dc9603243a78e35b42fa07cf03df1c11679c929ff0136edfc0811051046a870b
repoDigests:
- localhost/minikube-local-cache-test@sha256:616b543b1dc969143b517b2ff1970dcee76525034c6e5d42ccf9e29291500bb6
repoTags:
- localhost/minikube-local-cache-test:functional-238176
size: "3330"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238176 image ls --format yaml --alsologtostderr:
I0420 00:13:51.230797   93812 out.go:291] Setting OutFile to fd 1 ...
I0420 00:13:51.231163   93812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:51.231176   93812 out.go:304] Setting ErrFile to fd 2...
I0420 00:13:51.231183   93812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:51.232514   93812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
I0420 00:13:51.233130   93812 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:51.233248   93812 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:51.233673   93812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:51.233721   93812 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:51.248996   93812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
I0420 00:13:51.249457   93812 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:51.250069   93812 main.go:141] libmachine: Using API Version  1
I0420 00:13:51.250104   93812 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:51.250409   93812 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:51.250598   93812 main.go:141] libmachine: (functional-238176) Calling .GetState
I0420 00:13:51.252385   93812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:51.252421   93812 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:51.266374   93812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
I0420 00:13:51.266803   93812 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:51.267275   93812 main.go:141] libmachine: Using API Version  1
I0420 00:13:51.267298   93812 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:51.267588   93812 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:51.267774   93812 main.go:141] libmachine: (functional-238176) Calling .DriverName
I0420 00:13:51.267952   93812 ssh_runner.go:195] Run: systemctl --version
I0420 00:13:51.267978   93812 main.go:141] libmachine: (functional-238176) Calling .GetSSHHostname
I0420 00:13:51.270682   93812 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:51.271089   93812 main.go:141] libmachine: (functional-238176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:cf:b4", ip: ""} in network mk-functional-238176: {Iface:virbr1 ExpiryTime:2024-04-20 01:10:07 +0000 UTC Type:0 Mac:52:54:00:87:cf:b4 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:functional-238176 Clientid:01:52:54:00:87:cf:b4}
I0420 00:13:51.271122   93812 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined IP address 192.168.39.100 and MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:51.271272   93812 main.go:141] libmachine: (functional-238176) Calling .GetSSHPort
I0420 00:13:51.271432   93812 main.go:141] libmachine: (functional-238176) Calling .GetSSHKeyPath
I0420 00:13:51.271597   93812 main.go:141] libmachine: (functional-238176) Calling .GetSSHUsername
I0420 00:13:51.271729   93812 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/functional-238176/id_rsa Username:docker}
I0420 00:13:51.352207   93812 ssh_runner.go:195] Run: sudo crictl images --output json
I0420 00:13:51.392401   93812 main.go:141] libmachine: Making call to close driver server
I0420 00:13:51.392432   93812 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:51.392703   93812 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:51.392721   93812 main.go:141] libmachine: Making call to close connection to plugin binary
I0420 00:13:51.392732   93812 main.go:141] libmachine: Making call to close driver server
I0420 00:13:51.392739   93812 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:51.392974   93812 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:51.392991   93812 main.go:141] libmachine: Making call to close connection to plugin binary
I0420 00:13:51.393020   93812 main.go:141] libmachine: (functional-238176) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238176 ssh pgrep buildkitd: exit status 1 (191.343074ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image build -t localhost/my-image:functional-238176 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-238176 image build -t localhost/my-image:functional-238176 testdata/build --alsologtostderr: (2.662384318s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238176 image build -t localhost/my-image:functional-238176 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d7934abb16d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-238176
--> 233ca57b24c
Successfully tagged localhost/my-image:functional-238176
233ca57b24c6ad4b612d433092e4b95e0214b5d6fe6dfefb6a1f51afc3cb038e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238176 image build -t localhost/my-image:functional-238176 testdata/build --alsologtostderr:
I0420 00:13:51.645657   93866 out.go:291] Setting OutFile to fd 1 ...
I0420 00:13:51.645943   93866 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:51.645957   93866 out.go:304] Setting ErrFile to fd 2...
I0420 00:13:51.645964   93866 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 00:13:51.646203   93866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
I0420 00:13:51.646803   93866 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:51.647444   93866 config.go:182] Loaded profile config "functional-238176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 00:13:51.647818   93866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:51.647856   93866 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:51.663963   93866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32981
I0420 00:13:51.664354   93866 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:51.664928   93866 main.go:141] libmachine: Using API Version  1
I0420 00:13:51.664950   93866 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:51.665300   93866 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:51.665530   93866 main.go:141] libmachine: (functional-238176) Calling .GetState
I0420 00:13:51.667339   93866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0420 00:13:51.667388   93866 main.go:141] libmachine: Launching plugin server for driver kvm2
I0420 00:13:51.681536   93866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
I0420 00:13:51.681966   93866 main.go:141] libmachine: () Calling .GetVersion
I0420 00:13:51.682407   93866 main.go:141] libmachine: Using API Version  1
I0420 00:13:51.682431   93866 main.go:141] libmachine: () Calling .SetConfigRaw
I0420 00:13:51.682806   93866 main.go:141] libmachine: () Calling .GetMachineName
I0420 00:13:51.682974   93866 main.go:141] libmachine: (functional-238176) Calling .DriverName
I0420 00:13:51.683131   93866 ssh_runner.go:195] Run: systemctl --version
I0420 00:13:51.683153   93866 main.go:141] libmachine: (functional-238176) Calling .GetSSHHostname
I0420 00:13:51.685907   93866 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:51.686295   93866 main.go:141] libmachine: (functional-238176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:cf:b4", ip: ""} in network mk-functional-238176: {Iface:virbr1 ExpiryTime:2024-04-20 01:10:07 +0000 UTC Type:0 Mac:52:54:00:87:cf:b4 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:functional-238176 Clientid:01:52:54:00:87:cf:b4}
I0420 00:13:51.686321   93866 main.go:141] libmachine: (functional-238176) DBG | domain functional-238176 has defined IP address 192.168.39.100 and MAC address 52:54:00:87:cf:b4 in network mk-functional-238176
I0420 00:13:51.686465   93866 main.go:141] libmachine: (functional-238176) Calling .GetSSHPort
I0420 00:13:51.686622   93866 main.go:141] libmachine: (functional-238176) Calling .GetSSHKeyPath
I0420 00:13:51.686781   93866 main.go:141] libmachine: (functional-238176) Calling .GetSSHUsername
I0420 00:13:51.686910   93866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/functional-238176/id_rsa Username:docker}
I0420 00:13:51.768401   93866 build_images.go:161] Building image from path: /tmp/build.1916360134.tar
I0420 00:13:51.768473   93866 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0420 00:13:51.780492   93866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1916360134.tar
I0420 00:13:51.787950   93866 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1916360134.tar: stat -c "%s %y" /var/lib/minikube/build/build.1916360134.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1916360134.tar': No such file or directory
I0420 00:13:51.787985   93866 ssh_runner.go:362] scp /tmp/build.1916360134.tar --> /var/lib/minikube/build/build.1916360134.tar (3072 bytes)
I0420 00:13:51.816032   93866 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1916360134
I0420 00:13:51.832495   93866 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1916360134 -xf /var/lib/minikube/build/build.1916360134.tar
I0420 00:13:51.845796   93866 crio.go:315] Building image: /var/lib/minikube/build/build.1916360134
I0420 00:13:51.845881   93866 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-238176 /var/lib/minikube/build/build.1916360134 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0420 00:13:54.222240   93866 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-238176 /var/lib/minikube/build/build.1916360134 --cgroup-manager=cgroupfs: (2.376313327s)
I0420 00:13:54.222349   93866 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1916360134
I0420 00:13:54.235174   93866 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1916360134.tar
I0420 00:13:54.245353   93866 build_images.go:217] Built localhost/my-image:functional-238176 from /tmp/build.1916360134.tar
I0420 00:13:54.245389   93866 build_images.go:133] succeeded building to: functional-238176
I0420 00:13:54.245394   93866 build_images.go:134] failed building to: 
I0420 00:13:54.245414   93866 main.go:141] libmachine: Making call to close driver server
I0420 00:13:54.245434   93866 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:54.245746   93866 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:54.245783   93866 main.go:141] libmachine: Making call to close connection to plugin binary
I0420 00:13:54.245793   93866 main.go:141] libmachine: Making call to close driver server
I0420 00:13:54.245801   93866 main.go:141] libmachine: (functional-238176) Calling .Close
I0420 00:13:54.246030   93866 main.go:141] libmachine: Successfully made call to close driver server
I0420 00:13:54.246048   93866 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.12s)
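
Note (illustrative): the STEP 1/3..3/3 lines above imply a three-line Dockerfile (FROM the busybox base, RUN true, ADD content.txt) that minikube tars up, copies to the node, and builds there with podman. The sketch below reconstructs an equivalent build context; the exact contents of testdata/build are an assumption inferred from those STEP lines.

# Hypothetical build context approximating the recorded build.
mkdir -p /tmp/build-demo
echo "hello" > /tmp/build-demo/content.txt
cat > /tmp/build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# Build it on the functional-238176 node and tag the result locally there.
out/minikube-linux-amd64 -p functional-238176 image build \
  -t localhost/my-image:functional-238176 /tmp/build-demo --alsologtostderr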

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.201506391s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-238176
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image load --daemon gcr.io/google-containers/addon-resizer:functional-238176 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-238176 image load --daemon gcr.io/google-containers/addon-resizer:functional-238176 --alsologtostderr: (6.821058737s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image load --daemon gcr.io/google-containers/addon-resizer:functional-238176 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-238176 image load --daemon gcr.io/google-containers/addon-resizer:functional-238176 --alsologtostderr: (2.697979322s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.072601405s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-238176
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image load --daemon gcr.io/google-containers/addon-resizer:functional-238176 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-238176 image load --daemon gcr.io/google-containers/addon-resizer:functional-238176 --alsologtostderr: (7.577293497s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image rm gcr.io/google-containers/addon-resizer:functional-238176 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-238176
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-238176 image save --daemon gcr.io/google-containers/addon-resizer:functional-238176 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-238176
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.30s)
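
Note (illustrative): the Setup, Load, Reload, Remove, and Save blocks above exercise the round trip between the host Docker daemon and the node's CRI-O storage. A condensed sketch of that lifecycle, using the same addon-resizer tag the tests use:

# Tag an image on the host, then push the tag into the node's CRI-O storage.
docker pull gcr.io/google-containers/addon-resizer:1.8.8
docker tag gcr.io/google-containers/addon-resizer:1.8.8 \
  gcr.io/google-containers/addon-resizer:functional-238176
out/minikube-linux-amd64 -p functional-238176 image load --daemon \
  gcr.io/google-containers/addon-resizer:functional-238176
out/minikube-linux-amd64 -p functional-238176 image ls

# Drop the host copy, pull it back out of the node, and verify it is intact.
docker rmi gcr.io/google-containers/addon-resizer:functional-238176
out/minikube-linux-amd64 -p functional-238176 image save --daemon \
  gcr.io/google-containers/addon-resizer:functional-238176
docker image inspect gcr.io/google-containers/addon-resizer:functional-238176

# Finally remove the tag from the node again.
out/minikube-linux-amd64 -p functional-238176 image rm \
  gcr.io/google-containers/addon-resizer:functional-238176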

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-238176
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-238176
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-238176
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (204.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-371738 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0420 00:15:27.814704   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 00:15:55.501525   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-371738 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m23.59414195s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.30s)
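
Note (illustrative): the --ha flag requested above brings up a multi-control-plane cluster, and the follow-up status call confirms every node came up. An equivalent manual invocation, with the same flags the test used:

# Start an HA (multi-control-plane) cluster on the kvm2 driver with CRI-O.
out/minikube-linux-amd64 start -p ha-371738 --wait=true --memory=2200 \
  --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio

# Confirm all control-plane and worker nodes report healthy.
out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr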

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-371738 -- rollout status deployment/busybox: (2.272852632s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-bqndp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-f8cxz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-j7g5h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-bqndp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-f8cxz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-j7g5h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-bqndp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-f8cxz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-j7g5h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.70s)
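
Note (illustrative): the step above rolls out a busybox Deployment and then resolves an external name, the kubernetes Service short name, and its fully qualified name from every pod. A compact sketch of that verification loop, discovering pod names instead of hard-coding them:

out/minikube-linux-amd64 kubectl -p ha-371738 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p ha-371738 -- rollout status deployment/busybox

# Resolve external and in-cluster DNS names from every busybox pod.
for pod in $(out/minikube-linux-amd64 kubectl -p ha-371738 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
    out/minikube-linux-amd64 kubectl -p ha-371738 -- exec "$pod" -- nslookup "$name"
  done
done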

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-bqndp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-bqndp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-f8cxz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-f8cxz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-j7g5h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-371738 -- exec busybox-fc5497c4f-j7g5h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.35s)
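
Note (illustrative): the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above picks the resolved address out of busybox's nslookup output, and the follow-up ping confirms pods can reach the host-side gateway at 192.168.39.1. Sketched for a single pod, with the pod name as a placeholder taken from this run:

POD=busybox-fc5497c4f-bqndp   # any busybox pod from the deployment above

# Resolve the host alias inside the pod, then ping the host-side gateway.
out/minikube-linux-amd64 kubectl -p ha-371738 -- exec "$POD" -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 kubectl -p ha-371738 -- exec "$POD" -- \
  sh -c "ping -c 1 192.168.39.1"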

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (49.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-371738 -v=7 --alsologtostderr
E0420 00:18:11.657561   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:11.662903   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:11.673234   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:11.693546   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:11.733856   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:11.814166   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:11.974509   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:12.294667   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:12.935539   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:14.216514   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:16.776940   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:18:21.897615   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-371738 -v=7 --alsologtostderr: (48.239314841s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.11s)
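
Note (illustrative): joining a worker is a single `node add` against the existing profile. The repeated cert_rotation warnings above appear to refer to the previously torn-down functional-238176 profile and did not affect the result, since the step still passed.

# Add a worker node to the HA profile, then re-check cluster status.
out/minikube-linux-amd64 node add -p ha-371738 -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr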

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-371738 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp testdata/cp-test.txt ha-371738:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3122242891/001/cp-test_ha-371738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738:/home/docker/cp-test.txt ha-371738-m02:/home/docker/cp-test_ha-371738_ha-371738-m02.txt
E0420 00:18:32.138135   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m02 "sudo cat /home/docker/cp-test_ha-371738_ha-371738-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738:/home/docker/cp-test.txt ha-371738-m03:/home/docker/cp-test_ha-371738_ha-371738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m03 "sudo cat /home/docker/cp-test_ha-371738_ha-371738-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738:/home/docker/cp-test.txt ha-371738-m04:/home/docker/cp-test_ha-371738_ha-371738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m04 "sudo cat /home/docker/cp-test_ha-371738_ha-371738-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp testdata/cp-test.txt ha-371738-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3122242891/001/cp-test_ha-371738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m02:/home/docker/cp-test.txt ha-371738:/home/docker/cp-test_ha-371738-m02_ha-371738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738 "sudo cat /home/docker/cp-test_ha-371738-m02_ha-371738.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m02:/home/docker/cp-test.txt ha-371738-m03:/home/docker/cp-test_ha-371738-m02_ha-371738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m03 "sudo cat /home/docker/cp-test_ha-371738-m02_ha-371738-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m02:/home/docker/cp-test.txt ha-371738-m04:/home/docker/cp-test_ha-371738-m02_ha-371738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m04 "sudo cat /home/docker/cp-test_ha-371738-m02_ha-371738-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp testdata/cp-test.txt ha-371738-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3122242891/001/cp-test_ha-371738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt ha-371738:/home/docker/cp-test_ha-371738-m03_ha-371738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738 "sudo cat /home/docker/cp-test_ha-371738-m03_ha-371738.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt ha-371738-m02:/home/docker/cp-test_ha-371738-m03_ha-371738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m02 "sudo cat /home/docker/cp-test_ha-371738-m03_ha-371738-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m03:/home/docker/cp-test.txt ha-371738-m04:/home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m04 "sudo cat /home/docker/cp-test_ha-371738-m03_ha-371738-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp testdata/cp-test.txt ha-371738-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3122242891/001/cp-test_ha-371738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt ha-371738:/home/docker/cp-test_ha-371738-m04_ha-371738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738 "sudo cat /home/docker/cp-test_ha-371738-m04_ha-371738.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt ha-371738-m02:/home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m02 "sudo cat /home/docker/cp-test_ha-371738-m04_ha-371738-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 cp ha-371738-m04:/home/docker/cp-test.txt ha-371738-m03:/home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m03 "sudo cat /home/docker/cp-test_ha-371738-m04_ha-371738-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.49s)
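
Note (illustrative): every pairing above follows the same pattern: `minikube cp` writes a file to (or reads one from) a specific node, and `minikube ssh -n <node>` cats it back to confirm the transfer. One representative round trip, with node names taken from this run:

# Copy a local file onto the m02 node, then read it back over SSH.
out/minikube-linux-amd64 -p ha-371738 cp testdata/cp-test.txt \
  ha-371738-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-371738 ssh -n ha-371738-m02 \
  "sudo cat /home/docker/cp-test.txt"

# The same node:path syntax also copies directly between nodes.
out/minikube-linux-amd64 -p ha-371738 cp \
  ha-371738-m02:/home/docker/cp-test.txt \
  ha-371738-m03:/home/docker/cp-test_m02.txt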

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.511923523s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-371738 node delete m03 -v=7 --alsologtostderr: (16.764243901s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (383.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-371738 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0420 00:33:11.657769   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:34:34.703089   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:35:27.814637   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-371738 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m22.503222283s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (383.33s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (73.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-371738 --control-plane -v=7 --alsologtostderr
E0420 00:38:11.657577   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-371738 --control-plane -v=7 --alsologtostderr: (1m12.790073051s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-371738 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.68s)
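
The node add and delete operations exercised in this group map directly onto minikube subcommands; a minimal sketch using the profile and node name that appear in this log:

    # add another control-plane node to the HA cluster and check overall status
    minikube node add -p ha-371738 --control-plane
    minikube -p ha-371738 status
    # remove a secondary node by name (m03 is the node deleted earlier in this run)
    minikube -p ha-371738 node delete m03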

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

                                                
                                    
TestJSONOutput/start/Command (59.99s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-487745 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-487745 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (59.992550325s)
--- PASS: TestJSONOutput/start/Command (59.99s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-487745 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-487745 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.43s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-487745 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-487745 --output=json --user=testUser: (7.428696899s)
--- PASS: TestJSONOutput/stop/Command (7.43s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-677002 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-677002 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.806603ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c3df0c5d-4579-4dd9-b887-98773ce2df97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-677002] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"46a30646-ce84-4c20-9b64-ca591bfa17f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18703"}}
	{"specversion":"1.0","id":"600bffe7-8a79-4577-95b8-c6a010603738","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"453af98a-b9de-479f-acab-616d7e5ba8d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig"}}
	{"specversion":"1.0","id":"d3758c0f-7ab0-45d0-8149-227d48f202d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube"}}
	{"specversion":"1.0","id":"853c21c0-4abe-4b33-ba1d-0b97926370d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8df13c82-8d53-41a6-aa21-9581cc2bf5c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"00af88d1-72fe-48bd-a362-d0fce3d65f19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-677002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-677002
--- PASS: TestErrorJSONOutput (0.21s)
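
Each line of the --output=json stream shown above is a self-contained CloudEvents-style JSON object, so the error event can be pulled out with standard tooling. A minimal sketch, assuming jq is installed; the unsupported 'fail' driver is reused from the test purely to force an error event:

    minikube start -p json-output-error-677002 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + ": " + .data.message'

For the run above this prints 56: The driver 'fail' is not supported on linux/amd64.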

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (93.1s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-391001 --driver=kvm2  --container-runtime=crio
E0420 00:40:27.814907   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-391001 --driver=kvm2  --container-runtime=crio: (44.444525986s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-393424 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-393424 --driver=kvm2  --container-runtime=crio: (46.015528619s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-391001
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-393424
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-393424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-393424
helpers_test.go:175: Cleaning up "first-391001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-391001
--- PASS: TestMinikubeProfile (93.10s)
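
Once both clusters exist, the profile round-trip above reduces to two commands; a minimal sketch with the profile names from this log:

    # make first-391001 the active profile, then list all profiles as JSON
    minikube profile first-391001
    minikube profile list --output json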

                                                
                                    
TestMountStart/serial/StartWithMountFirst (25.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-039892 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-039892 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.942812972s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.94s)
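
The same start-with-mount flow can be reproduced outside the test harness; a minimal sketch using the flags and profile name from this log (the grep is the check the VerifyMountFirst test below performs):

    # start a VM that mounts the host directory over 9p but skips Kubernetes
    minikube start -p mount-start-1-039892 --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=kvm2 --container-runtime=crio
    # confirm the 9p mount is visible inside the guest
    minikube -p mount-start-1-039892 ssh -- mount | grep 9p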

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-039892 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-039892 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.59s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-054355 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-054355 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.591877723s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.59s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-054355 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-054355 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-039892 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-054355 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-054355 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-054355
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-054355: (1.396333935s)
--- PASS: TestMountStart/serial/Stop (1.40s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.91s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-054355
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-054355: (20.911380106s)
--- PASS: TestMountStart/serial/RestartStopped (21.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-054355 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-054355 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-059001 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0420 00:43:11.657488   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
E0420 00:43:30.863433   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-059001 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.867842857s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.29s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-059001 -- rollout status deployment/busybox: (2.575448303s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-h5n42 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-xlthm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-h5n42 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-xlthm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-h5n42 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-xlthm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.24s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-h5n42 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-h5n42 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-xlthm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-059001 -- exec busybox-fc5497c4f-xlthm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                    
TestMultiNode/serial/AddNode (42.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-059001 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-059001 -v 3 --alsologtostderr: (41.522255907s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.11s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-059001 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp testdata/cp-test.txt multinode-059001:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp multinode-059001:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2465559633/001/cp-test_multinode-059001.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp multinode-059001:/home/docker/cp-test.txt multinode-059001-m02:/home/docker/cp-test_multinode-059001_multinode-059001-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m02 "sudo cat /home/docker/cp-test_multinode-059001_multinode-059001-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp multinode-059001:/home/docker/cp-test.txt multinode-059001-m03:/home/docker/cp-test_multinode-059001_multinode-059001-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m03 "sudo cat /home/docker/cp-test_multinode-059001_multinode-059001-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp testdata/cp-test.txt multinode-059001-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp multinode-059001-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2465559633/001/cp-test_multinode-059001-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp multinode-059001-m02:/home/docker/cp-test.txt multinode-059001:/home/docker/cp-test_multinode-059001-m02_multinode-059001.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001 "sudo cat /home/docker/cp-test_multinode-059001-m02_multinode-059001.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp multinode-059001-m02:/home/docker/cp-test.txt multinode-059001-m03:/home/docker/cp-test_multinode-059001-m02_multinode-059001-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m03 "sudo cat /home/docker/cp-test_multinode-059001-m02_multinode-059001-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp testdata/cp-test.txt multinode-059001-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp multinode-059001-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2465559633/001/cp-test_multinode-059001-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp multinode-059001-m03:/home/docker/cp-test.txt multinode-059001:/home/docker/cp-test_multinode-059001-m03_multinode-059001.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001 "sudo cat /home/docker/cp-test_multinode-059001-m03_multinode-059001.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 cp multinode-059001-m03:/home/docker/cp-test.txt multinode-059001-m02:/home/docker/cp-test_multinode-059001-m03_multinode-059001-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 ssh -n multinode-059001-m02 "sudo cat /home/docker/cp-test_multinode-059001-m03_multinode-059001-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.54s)
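
The copy/verify pattern above is the same pair of commands repeated for every node combination; a minimal sketch for one direction, using names taken from this log:

    # push a file into the primary node, then read it back over SSH
    minikube -p multinode-059001 cp testdata/cp-test.txt multinode-059001:/home/docker/cp-test.txt
    minikube -p multinode-059001 ssh -n multinode-059001 "sudo cat /home/docker/cp-test.txt"
    # node-to-node copies work the same way
    minikube -p multinode-059001 cp multinode-059001:/home/docker/cp-test.txt multinode-059001-m02:/home/docker/cp-test_multinode-059001_multinode-059001-m02.txt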

                                                
                                    
TestMultiNode/serial/StopNode (3.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-059001 node stop m03: (2.296861368s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-059001 status: exit status 7 (430.67584ms)

                                                
                                                
-- stdout --
	multinode-059001
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-059001-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-059001-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-059001 status --alsologtostderr: exit status 7 (430.105966ms)

                                                
                                                
-- stdout --
	multinode-059001
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-059001-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-059001-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 00:45:26.297433  111677 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:45:26.297731  111677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:45:26.297743  111677 out.go:304] Setting ErrFile to fd 2...
	I0420 00:45:26.297748  111677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:45:26.297950  111677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 00:45:26.298136  111677 out.go:298] Setting JSON to false
	I0420 00:45:26.298170  111677 mustload.go:65] Loading cluster: multinode-059001
	I0420 00:45:26.298217  111677 notify.go:220] Checking for updates...
	I0420 00:45:26.299970  111677 config.go:182] Loaded profile config "multinode-059001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:45:26.299995  111677 status.go:255] checking status of multinode-059001 ...
	I0420 00:45:26.300556  111677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:45:26.300601  111677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:45:26.315574  111677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I0420 00:45:26.316018  111677 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:45:26.316534  111677 main.go:141] libmachine: Using API Version  1
	I0420 00:45:26.316559  111677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:45:26.316968  111677 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:45:26.317201  111677 main.go:141] libmachine: (multinode-059001) Calling .GetState
	I0420 00:45:26.318634  111677 status.go:330] multinode-059001 host status = "Running" (err=<nil>)
	I0420 00:45:26.318656  111677 host.go:66] Checking if "multinode-059001" exists ...
	I0420 00:45:26.319003  111677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:45:26.319048  111677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:45:26.333883  111677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0420 00:45:26.334339  111677 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:45:26.334763  111677 main.go:141] libmachine: Using API Version  1
	I0420 00:45:26.334787  111677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:45:26.335083  111677 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:45:26.335225  111677 main.go:141] libmachine: (multinode-059001) Calling .GetIP
	I0420 00:45:26.337921  111677 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:45:26.338277  111677 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:45:26.338306  111677 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:45:26.338412  111677 host.go:66] Checking if "multinode-059001" exists ...
	I0420 00:45:26.338681  111677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:45:26.338720  111677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:45:26.353095  111677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I0420 00:45:26.353485  111677 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:45:26.353852  111677 main.go:141] libmachine: Using API Version  1
	I0420 00:45:26.353875  111677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:45:26.354201  111677 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:45:26.354391  111677 main.go:141] libmachine: (multinode-059001) Calling .DriverName
	I0420 00:45:26.354566  111677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:45:26.354589  111677 main.go:141] libmachine: (multinode-059001) Calling .GetSSHHostname
	I0420 00:45:26.357005  111677 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:45:26.357428  111677 main.go:141] libmachine: (multinode-059001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bf:5f", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:42:55 +0000 UTC Type:0 Mac:52:54:00:98:bf:5f Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-059001 Clientid:01:52:54:00:98:bf:5f}
	I0420 00:45:26.357453  111677 main.go:141] libmachine: (multinode-059001) DBG | domain multinode-059001 has defined IP address 192.168.39.200 and MAC address 52:54:00:98:bf:5f in network mk-multinode-059001
	I0420 00:45:26.357569  111677 main.go:141] libmachine: (multinode-059001) Calling .GetSSHPort
	I0420 00:45:26.357728  111677 main.go:141] libmachine: (multinode-059001) Calling .GetSSHKeyPath
	I0420 00:45:26.357859  111677 main.go:141] libmachine: (multinode-059001) Calling .GetSSHUsername
	I0420 00:45:26.357965  111677 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001/id_rsa Username:docker}
	I0420 00:45:26.442234  111677 ssh_runner.go:195] Run: systemctl --version
	I0420 00:45:26.449339  111677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:45:26.466092  111677 kubeconfig.go:125] found "multinode-059001" server: "https://192.168.39.200:8443"
	I0420 00:45:26.466131  111677 api_server.go:166] Checking apiserver status ...
	I0420 00:45:26.466169  111677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:45:26.481114  111677 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1172/cgroup
	W0420 00:45:26.492328  111677 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1172/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0420 00:45:26.492378  111677 ssh_runner.go:195] Run: ls
	I0420 00:45:26.497639  111677 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0420 00:45:26.501810  111677 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0420 00:45:26.501832  111677 status.go:422] multinode-059001 apiserver status = Running (err=<nil>)
	I0420 00:45:26.501844  111677 status.go:257] multinode-059001 status: &{Name:multinode-059001 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:45:26.501867  111677 status.go:255] checking status of multinode-059001-m02 ...
	I0420 00:45:26.502165  111677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:45:26.502209  111677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:45:26.517197  111677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
	I0420 00:45:26.517644  111677 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:45:26.518125  111677 main.go:141] libmachine: Using API Version  1
	I0420 00:45:26.518149  111677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:45:26.518463  111677 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:45:26.518651  111677 main.go:141] libmachine: (multinode-059001-m02) Calling .GetState
	I0420 00:45:26.520064  111677 status.go:330] multinode-059001-m02 host status = "Running" (err=<nil>)
	I0420 00:45:26.520081  111677 host.go:66] Checking if "multinode-059001-m02" exists ...
	I0420 00:45:26.520338  111677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:45:26.520375  111677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:45:26.535098  111677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I0420 00:45:26.535506  111677 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:45:26.535946  111677 main.go:141] libmachine: Using API Version  1
	I0420 00:45:26.535967  111677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:45:26.536308  111677 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:45:26.536478  111677 main.go:141] libmachine: (multinode-059001-m02) Calling .GetIP
	I0420 00:45:26.539005  111677 main.go:141] libmachine: (multinode-059001-m02) DBG | domain multinode-059001-m02 has defined MAC address 52:54:00:3a:54:f9 in network mk-multinode-059001
	I0420 00:45:26.539410  111677 main.go:141] libmachine: (multinode-059001-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:54:f9", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:44:00 +0000 UTC Type:0 Mac:52:54:00:3a:54:f9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:multinode-059001-m02 Clientid:01:52:54:00:3a:54:f9}
	I0420 00:45:26.539449  111677 main.go:141] libmachine: (multinode-059001-m02) DBG | domain multinode-059001-m02 has defined IP address 192.168.39.91 and MAC address 52:54:00:3a:54:f9 in network mk-multinode-059001
	I0420 00:45:26.539551  111677 host.go:66] Checking if "multinode-059001-m02" exists ...
	I0420 00:45:26.539836  111677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:45:26.539868  111677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:45:26.554156  111677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I0420 00:45:26.554576  111677 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:45:26.555013  111677 main.go:141] libmachine: Using API Version  1
	I0420 00:45:26.555036  111677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:45:26.555396  111677 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:45:26.555585  111677 main.go:141] libmachine: (multinode-059001-m02) Calling .DriverName
	I0420 00:45:26.555792  111677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:45:26.555815  111677 main.go:141] libmachine: (multinode-059001-m02) Calling .GetSSHHostname
	I0420 00:45:26.558508  111677 main.go:141] libmachine: (multinode-059001-m02) DBG | domain multinode-059001-m02 has defined MAC address 52:54:00:3a:54:f9 in network mk-multinode-059001
	I0420 00:45:26.558941  111677 main.go:141] libmachine: (multinode-059001-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:54:f9", ip: ""} in network mk-multinode-059001: {Iface:virbr1 ExpiryTime:2024-04-20 01:44:00 +0000 UTC Type:0 Mac:52:54:00:3a:54:f9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:multinode-059001-m02 Clientid:01:52:54:00:3a:54:f9}
	I0420 00:45:26.558969  111677 main.go:141] libmachine: (multinode-059001-m02) DBG | domain multinode-059001-m02 has defined IP address 192.168.39.91 and MAC address 52:54:00:3a:54:f9 in network mk-multinode-059001
	I0420 00:45:26.559146  111677 main.go:141] libmachine: (multinode-059001-m02) Calling .GetSSHPort
	I0420 00:45:26.559319  111677 main.go:141] libmachine: (multinode-059001-m02) Calling .GetSSHKeyPath
	I0420 00:45:26.559488  111677 main.go:141] libmachine: (multinode-059001-m02) Calling .GetSSHUsername
	I0420 00:45:26.559625  111677 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18703-76456/.minikube/machines/multinode-059001-m02/id_rsa Username:docker}
	I0420 00:45:26.638192  111677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:45:26.653644  111677 status.go:257] multinode-059001-m02 status: &{Name:multinode-059001-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0420 00:45:26.653673  111677 status.go:255] checking status of multinode-059001-m03 ...
	I0420 00:45:26.654062  111677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0420 00:45:26.654117  111677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0420 00:45:26.669137  111677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0420 00:45:26.669649  111677 main.go:141] libmachine: () Calling .GetVersion
	I0420 00:45:26.670233  111677 main.go:141] libmachine: Using API Version  1
	I0420 00:45:26.670263  111677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0420 00:45:26.670554  111677 main.go:141] libmachine: () Calling .GetMachineName
	I0420 00:45:26.670730  111677 main.go:141] libmachine: (multinode-059001-m03) Calling .GetState
	I0420 00:45:26.672127  111677 status.go:330] multinode-059001-m03 host status = "Stopped" (err=<nil>)
	I0420 00:45:26.672142  111677 status.go:343] host is not running, skipping remaining checks
	I0420 00:45:26.672150  111677 status.go:257] multinode-059001-m03 status: &{Name:multinode-059001-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.16s)
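
Note that the non-zero exits above are expected: in this run minikube status returns exit code 7 while any node in the profile is stopped, and the stdout block still reports per-node state. A minimal sketch of the same check:

    minikube -p multinode-059001 node stop m03
    minikube -p multinode-059001 status; echo "status exit code: $?"   # 7 in this run while m03 is stopped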

                                                
                                    
TestMultiNode/serial/StartAfterStop (27.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 node start m03 -v=7 --alsologtostderr
E0420 00:45:27.815568   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-059001 node start m03 -v=7 --alsologtostderr: (26.890316843s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.52s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-059001 node delete m03: (1.864258115s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (172.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-059001 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0420 00:55:27.814528   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-059001 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m51.928345243s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-059001 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (172.47s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-059001
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-059001-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-059001-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.281658ms)

                                                
                                                
-- stdout --
	* [multinode-059001-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-059001-m02' is duplicated with machine name 'multinode-059001-m02' in profile 'multinode-059001'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-059001-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-059001-m03 --driver=kvm2  --container-runtime=crio: (45.886925624s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-059001
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-059001: exit status 80 (237.063556ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-059001 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-059001-m03 already exists in multinode-059001-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-059001-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.24s)

                                                
                                    
TestScheduledStopUnix (116.16s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-519361 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-519361 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.395307114s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-519361 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-519361 -n scheduled-stop-519361
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-519361 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-519361 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-519361 -n scheduled-stop-519361
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-519361
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-519361 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0420 01:03:11.657693   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-519361
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-519361: exit status 7 (78.152011ms)

                                                
                                                
-- stdout --
	scheduled-stop-519361
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-519361 -n scheduled-stop-519361
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-519361 -n scheduled-stop-519361: exit status 7 (75.520384ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-519361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-519361
--- PASS: TestScheduledStopUnix (116.16s)
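
The scheduled-stop flow above (schedule, cancel, re-schedule, then verify the host reports Stopped) can be reproduced outside the test with the same commands. Below is a minimal sketch, not taken from scheduled_stop_test.go; note that `status` exits with code 7 once the host is stopped (as the log shows), so the sketch ignores that error and reads stdout instead.

	// scheduled_stop_sketch.go - hypothetical illustration of the flow above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const profile = "scheduled-stop-519361"
		bin := "out/minikube-linux-amd64"

		// Schedule a stop, then cancel it before it fires.
		exec.Command(bin, "stop", "-p", profile, "--schedule", "15s").Run()
		exec.Command(bin, "stop", "-p", profile, "--cancel-scheduled").Run()

		// Re-schedule and let the stop happen.
		exec.Command(bin, "stop", "-p", profile, "--schedule", "15s").Run()
		time.Sleep(30 * time.Second)

		// status exits 7 when the host is stopped, so inspect stdout rather than err.
		out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).Output()
		fmt.Println("host state:", strings.TrimSpace(string(out))) // expected: Stopped
	}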

                                                
                                    
x
+
TestRunningBinaryUpgrade (176.65s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1350630530 start -p running-upgrade-981367 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1350630530 start -p running-upgrade-981367 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m28.741242509s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-981367 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0420 01:07:54.706504   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-981367 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.278207516s)
helpers_test.go:175: Cleaning up "running-upgrade-981367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-981367
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-981367: (1.164520893s)
--- PASS: TestRunningBinaryUpgrade (176.65s)
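
The running-binary upgrade above boils down to three steps: bring a cluster up with an old released binary, re-run `start` on the same still-running profile with the freshly built binary, then delete the profile. A rough outline follows (hypothetical, not the real version_upgrade_test.go); the binary paths are taken from the log and would differ on another run.

	// running_upgrade_sketch.go - sketch of the in-place upgrade sequence above.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(bin string, args ...string) {
		if err := exec.Command(bin, args...).Run(); err != nil {
			log.Fatalf("%s %v: %v", bin, args, err)
		}
	}

	func main() {
		const profile = "running-upgrade-981367"
		oldBin := "/tmp/minikube-v1.26.0.1350630530" // old released binary, per the log
		newBin := "out/minikube-linux-amd64"         // binary under test

		// 1. Bring the cluster up with the old binary.
		run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
		// 2. Upgrade in place: the new binary reuses the still-running profile.
		run(newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
		// 3. Clean up.
		run(newBin, "delete", "-p", profile)
	}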

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254901 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-254901 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (100.58257ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-254901] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
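
The exit-status-14 failure above is a pure flag-validation check: `--no-kubernetes` cannot be combined with `--kubernetes-version`, and minikube's own advice is to run `minikube config unset kubernetes-version` if a global default is set. The sketch below merely restates that observable behaviour; it is not minikube's actual implementation.

	// flag_conflict_sketch.go - hypothetical mirror of the MK_USAGE check above.
	package main

	import (
		"errors"
		"fmt"
	)

	func validate(noKubernetes bool, kubernetesVersion string) error {
		if noKubernetes && kubernetesVersion != "" {
			return errors.New("cannot specify --kubernetes-version with --no-kubernetes; " +
				"run `minikube config unset kubernetes-version` to clear a global default")
		}
		return nil
	}

	func main() {
		// Mirrors the failing invocation: --no-kubernetes --kubernetes-version=1.20
		if err := validate(true, "1.20"); err != nil {
			fmt.Println("X Exiting due to MK_USAGE:", err)
		}
	}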

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (96.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254901 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-254901 --driver=kvm2  --container-runtime=crio: (1m36.287197186s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-254901 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-831611 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-831611 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (116.442308ms)

                                                
                                                
-- stdout --
	* [false-831611] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:03:44.401207  119272 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:03:44.401343  119272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:03:44.401355  119272 out.go:304] Setting ErrFile to fd 2...
	I0420 01:03:44.401370  119272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:03:44.401572  119272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-76456/.minikube/bin
	I0420 01:03:44.402190  119272 out.go:298] Setting JSON to false
	I0420 01:03:44.403042  119272 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13571,"bootTime":1713561453,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0420 01:03:44.403107  119272 start.go:139] virtualization: kvm guest
	I0420 01:03:44.405959  119272 out.go:177] * [false-831611] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0420 01:03:44.407354  119272 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:03:44.408559  119272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:03:44.407359  119272 notify.go:220] Checking for updates...
	I0420 01:03:44.411222  119272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-76456/kubeconfig
	I0420 01:03:44.412410  119272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-76456/.minikube
	I0420 01:03:44.413843  119272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0420 01:03:44.415216  119272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:03:44.416976  119272 config.go:182] Loaded profile config "NoKubernetes-254901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:03:44.417067  119272 config.go:182] Loaded profile config "force-systemd-env-339159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:03:44.417148  119272 config.go:182] Loaded profile config "offline-crio-255012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:03:44.417239  119272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:03:44.452424  119272 out.go:177] * Using the kvm2 driver based on user configuration
	I0420 01:03:44.453757  119272 start.go:297] selected driver: kvm2
	I0420 01:03:44.453774  119272 start.go:901] validating driver "kvm2" against <nil>
	I0420 01:03:44.453784  119272 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:03:44.455540  119272 out.go:177] 
	W0420 01:03:44.456726  119272 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0420 01:03:44.457895  119272 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-831611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-831611" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-831611

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831611"

                                                
                                                
----------------------- debugLogs end: false-831611 [took: 3.105409269s] --------------------------------
helpers_test.go:175: Cleaning up "false-831611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-831611
--- PASS: TestNetworkPlugins/group/false (3.36s)
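
The entire block above documents one validation: `--cni=false` is rejected up front when the container runtime is crio ("The \"crio\" container runtime requires CNI", exit status 14), so no cluster is created and every debugLogs probe afterwards correctly finds no profile or context. The sketch below is a hypothetical restatement of that pre-flight check, not minikube's actual code.

	// cni_check_sketch.go - hypothetical mirror of the validation observed above.
	package main

	import (
		"fmt"
		"os"
	)

	func checkRuntimeCNI(runtime, cni string) error {
		// Only the combination exercised by the test is modelled here.
		if runtime == "crio" && cni == "false" {
			return fmt.Errorf("the %q container runtime requires CNI", runtime)
		}
		return nil
	}

	func main() {
		if err := checkRuntimeCNI("crio", "false"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
			os.Exit(14) // the exit status reported by the test above
		}
	}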

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (43.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254901 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0420 01:05:27.814984   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-254901 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.947370398s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-254901 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-254901 status -o json: exit status 2 (268.626096ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-254901","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-254901
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.07s)
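
Restarting an existing profile with `--no-kubernetes` leaves the VM running but Kubernetes components stopped, and `status -o json` then exits with code 2 while still printing the JSON shown above. A small hypothetical sketch of decoding that output follows; the field names are copied from the log, and the non-zero exit is tolerated because stdout still carries the document.

	// nokube_status_sketch.go - sketch of reading the status JSON shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type status struct {
		Name      string
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-254901",
			"status", "-o", "json").Output()
		var s status
		if jerr := json.Unmarshal(out, &s); jerr != nil {
			log.Fatal(jerr)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s (status exit err: %v)\n",
			s.Host, s.Kubelet, s.APIServer, err)
		// Expected, per the log above: host=Running kubelet=Stopped apiserver=Stopped
	}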

                                                
                                    
x
+
TestNoKubernetes/serial/Start (50.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254901 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-254901 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.235715101s)
--- PASS: TestNoKubernetes/serial/Start (50.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-254901 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-254901 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.068377ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
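
The verification above is a single command: ssh into the node and ask systemd whether the kubelet unit is active. systemctl exits 3 for an inactive unit, which surfaces as a non-zero exit from `minikube ssh`, and that is exactly what the test expects. A hypothetical sketch of the same check:

	// verify_no_kubelet_sketch.go - sketch of the "kubelet not running" check above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-254901",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not running, as expected:", err)
			return
		}
		fmt.Println("unexpected: kubelet is active")
	}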

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-254901
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-254901: (1.47838709s)
--- PASS: TestNoKubernetes/serial/Stop (1.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (68.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254901 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-254901 --driver=kvm2  --container-runtime=crio: (1m8.226233889s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (68.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-254901 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-254901 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.516752ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (102.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2067904582 start -p stopped-upgrade-872162 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0420 01:08:11.658337   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2067904582 start -p stopped-upgrade-872162 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (54.592160456s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2067904582 -p stopped-upgrade-872162 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2067904582 -p stopped-upgrade-872162 stop: (2.123492292s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-872162 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-872162 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.729394223s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.45s)
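
Unlike the running-binary upgrade earlier, the flow above stops the old binary's cluster before the new binary starts it back up. A rough outline (hypothetical, not the real version_upgrade_test.go), with the binary path taken from the log:

	// stopped_upgrade_sketch.go - sketch of the stopped-binary upgrade sequence above.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(bin string, args ...string) {
		if err := exec.Command(bin, args...).Run(); err != nil {
			log.Fatalf("%s %v: %v", bin, args, err)
		}
	}

	func main() {
		const profile = "stopped-upgrade-872162"
		oldBin := "/tmp/minikube-v1.26.0.2067904582" // old released binary from the log
		newBin := "out/minikube-linux-amd64"

		run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
		run(oldBin, "-p", profile, "stop")
		run(newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
	}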

                                                
                                    
x
+
TestPause/serial/Start (112.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-680144 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-680144 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m52.093044053s)
--- PASS: TestPause/serial/Start (112.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (124.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m4.055486489s)
--- PASS: TestNetworkPlugins/group/auto/Start (124.06s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-872162
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-872162: (1.090288735s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (125.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0420 01:10:27.814646   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m5.915918657s)
--- PASS: TestNetworkPlugins/group/bridge/Start (125.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-831611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-831611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vpgzk" [fac53341-33ef-401f-ac5e-31cff215c120] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vpgzk" [fac53341-33ef-401f-ac5e-31cff215c120] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004081527s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.22s)
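
The NetCatPod step above force-replaces a netcat deployment from testdata and then polls until pods matching app=netcat are Running. An equivalent wait can be expressed with `kubectl wait`; the sketch below is a hypothetical stand-in for the test helper, not net_test.go itself.

	// netcat_wait_sketch.go - rough equivalent of the NetCatPod deploy-and-wait step.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func kubectl(args ...string) {
		cmd := exec.Command("kubectl", append([]string{"--context", "auto-831611"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		// Same manifest the test uses.
		kubectl("replace", "--force", "-f", "testdata/netcat-deployment.yaml")
		// Block until the pods selected by app=netcat report Ready.
		kubectl("wait", "--for=condition=ready", "pod", "--selector=app=netcat",
			"--namespace=default", "--timeout=15m")
	}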

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-831611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
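
As I read the two probes above, Localhost checks that the netcat pod can reach port 8080 on 127.0.0.1, while HairPin checks that the pod can reach its own Service name ("netcat"), i.e. traffic that leaves the pod and loops back through the service VIP. A hypothetical sketch running both probes with the same `nc` invocation the tests use:

	// hairpin_probe_sketch.go - sketch of the Localhost and HairPin probes above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func probe(ctx, target string) error {
		return exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat", "--",
			"/bin/sh", "-c", "nc -w 5 -i 5 -z "+target+" 8080").Run()
	}

	func main() {
		for _, target := range []string{"localhost", "netcat"} {
			if err := probe("auto-831611", target); err != nil {
				fmt.Printf("probe %s:8080 failed: %v\n", target, err)
				continue
			}
			fmt.Printf("probe %s:8080 ok\n", target)
		}
	}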

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-831611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-831611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sng2s" [ad158995-33a2-4692-9313-5d6d7678b0ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sng2s" [ad158995-33a2-4692-9313-5d6d7678b0ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004106057s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (115.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m55.632482647s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (115.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (110.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m50.045516578s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (110.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-831611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (132.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m12.554628989s)
--- PASS: TestNetworkPlugins/group/flannel/Start (132.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-n7m4d" [61e46fed-c489-4040-8e44-3064d4d13ebe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006336815s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
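
For CNIs that ship their own controller pods (kindnet here, flannel later in this group), the suite first waits for the plugin's pods (label app=kindnet in kube-system) before running any connectivity checks. A hypothetical equivalent using `kubectl wait`:

	// cni_controller_wait_sketch.go - sketch of the ControllerPod readiness wait above.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "kindnet-831611",
			"wait", "--for=condition=ready", "pod",
			"--selector=app=kindnet", "--namespace=kube-system", "--timeout=10m")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}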

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-831611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-831611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bcf7l" [80a05535-125f-45b9-a0e7-d30365e3d450] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bcf7l" [80a05535-125f-45b9-a0e7-d30365e3d450] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.006274905s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-831611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-831611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rdncl" [01346431-b816-422c-b2d5-86c758acd98b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rdncl" [01346431-b816-422c-b2d5-86c758acd98b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004636909s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-831611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (99.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m39.760032708s)
--- PASS: TestNetworkPlugins/group/calico/Start (99.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-831611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (106.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-831611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m46.84921911s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (106.85s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cqwsb" [520e8b1f-778e-4a0d-9ec0-0d39a430f53d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004933025s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-831611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-831611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xb76p" [869ca1da-95d4-46e6-8df3-2670fc6a8737] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xb76p" [869ca1da-95d4-46e6-8df3-2670fc6a8737] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004925026s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-831611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (153.63s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-338118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0420 01:15:27.815158   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-338118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (2m33.633461129s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (153.63s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-24vpf" [edd228e6-47dd-4d34-9274-c751fdc0d190] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.043094096s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-831611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.7s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-831611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-l2vgg" [4a617789-1c1c-4d2d-9691-9e407ed29f42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-l2vgg" [4a617789-1c1c-4d2d-9691-9e407ed29f42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00470235s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.70s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-831611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-831611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-831611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cmjl5" [46916a7d-bfca-45b1-bc81-ae7352973902] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cmjl5" [46916a7d-bfca-45b1-bc81-ae7352973902] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004799336s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (106.41s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-269507 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-269507 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m46.407085529s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (106.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-831611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-831611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
E0420 01:45:27.814927   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-907988 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0420 01:16:50.589009   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
E0420 01:16:50.864592   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/addons-903502/client.crt: no such file or directory
E0420 01:16:54.409752   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:54.415040   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:54.425293   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:54.445561   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:54.486156   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:54.566479   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:54.726881   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:55.048012   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:55.688744   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:57.060985   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:16:59.622185   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:17:04.743111   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:17:11.069722   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
E0420 01:17:14.983398   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
E0420 01:17:35.463763   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-907988 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m14.331289117s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-338118 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9df5c73d-3fba-4c2c-a989-ec3bdc3e78bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9df5c73d-3fba-4c2c-a989-ec3bdc3e78bb] Running
E0420 01:17:52.030814   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004245029s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-338118 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-338118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-338118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.107201905s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-338118 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-907988 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c851f419-7c45-47d0-81ca-8264568bdaee] Pending
helpers_test.go:344: "busybox" [c851f419-7c45-47d0-81ca-8264568bdaee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c851f419-7c45-47d0-81ca-8264568bdaee] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003904325s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-907988 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-907988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-907988 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-269507 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [732d2c00-1b6b-4d99-b7ad-16408ed8a3dd] Pending
E0420 01:18:11.657461   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/functional-238176/client.crt: no such file or directory
helpers_test.go:344: "busybox" [732d2c00-1b6b-4d99-b7ad-16408ed8a3dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [732d2c00-1b6b-4d99-b7ad-16408ed8a3dd] Running
E0420 01:18:16.424171   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/bridge-831611/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004643564s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-269507 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-269507 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-269507 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (742.48s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-338118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-338118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (12m22.178865185s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-338118 -n no-preload-338118
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (742.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (608.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-907988 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-907988 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (10m8.646598895s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-907988 -n default-k8s-diff-port-907988
E0420 01:30:47.874448   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (608.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (623.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-269507 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0420 01:20:52.995245   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:58.116073   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:20:58.498565   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
E0420 01:21:08.356347   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:21:12.218984   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:12.224249   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:12.234452   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:12.254640   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:12.294949   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:12.375339   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:12.535745   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:12.856327   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:13.497358   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:14.778172   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:17.338672   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:22.459473   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
E0420 01:21:28.837439   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:21:30.106519   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/auto-831611/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-269507 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (10m22.962531406s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-269507 -n embed-certs-269507
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (623.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-564860 --alsologtostderr -v=3
E0420 01:21:32.700457   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-564860 --alsologtostderr -v=3: (1.527487677s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-564860 -n old-k8s-version-564860: exit status 7 (74.833378ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-564860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.62s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-776287 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0420 01:44:36.575845   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/flannel-831611/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-776287 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m0.616368163s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.62s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-776287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-776287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.211682233s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-776287 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-776287 --alsologtostderr -v=3: (11.382289243s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-776287 -n newest-cni-776287
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-776287 -n newest-cni-776287: exit status 7 (83.526441ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-776287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.65s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-776287 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0420 01:45:47.874077   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/calico-831611/client.crt: no such file or directory
E0420 01:46:12.218603   83742 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-76456/.minikube/profiles/custom-flannel-831611/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-776287 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (38.374375199s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-776287 -n newest-cni-776287
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.65s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-776287 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-776287 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-776287 -n newest-cni-776287
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-776287 -n newest-cni-776287: exit status 2 (267.444795ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-776287 -n newest-cni-776287
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-776287 -n newest-cni-776287: exit status 2 (273.49118ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-776287 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-776287 -n newest-cni-776287
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-776287 -n newest-cni-776287
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.27s)

                                                
                                    

Test skip (36/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.02
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
116 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
253 TestNetworkPlugins/group/kubenet 3.28
262 TestNetworkPlugins/group/cilium 3.54
277 TestStartStop/group/disable-driver-mounts 0.15
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-831611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-831611

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-831611

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-831611

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-831611

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-831611

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-831611

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-831611

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-831611

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-831611

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-831611

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /etc/hosts:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /etc/resolv.conf:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-831611

>>> host: crictl pods:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: crictl containers:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> k8s: describe netcat deployment:
error: context "kubenet-831611" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-831611" does not exist

>>> k8s: netcat logs:
error: context "kubenet-831611" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-831611" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-831611" does not exist

>>> k8s: coredns logs:
error: context "kubenet-831611" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-831611" does not exist

>>> k8s: api server logs:
error: context "kubenet-831611" does not exist

>>> host: /etc/cni:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: ip a s:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: ip r s:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: iptables-save:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: iptables table nat:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-831611" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-831611" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-831611" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: kubelet daemon config:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> k8s: kubelet logs:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-831611

>>> host: docker daemon status:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: docker daemon config:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: docker system info:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: cri-docker daemon status:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: cri-docker daemon config:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: cri-dockerd version:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: containerd daemon status:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: containerd daemon config:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: containerd config dump:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: crio daemon status:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: crio daemon config:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: /etc/crio:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"

>>> host: crio config:
* Profile "kubenet-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831611"
----------------------- debugLogs end: kubenet-831611 [took: 3.127984936s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-831611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-831611
--- SKIP: TestNetworkPlugins/group/kubenet (3.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-831611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-831611

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-831611

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-831611

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-831611

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-831611

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-831611

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-831611

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-831611

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-831611

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-831611

>>> host: /etc/nsswitch.conf:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /etc/hosts:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /etc/resolv.conf:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-831611

>>> host: crictl pods:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: crictl containers:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> k8s: describe netcat deployment:
error: context "cilium-831611" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-831611" does not exist

>>> k8s: netcat logs:
error: context "cilium-831611" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-831611" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-831611" does not exist

>>> k8s: coredns logs:
error: context "cilium-831611" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-831611" does not exist

>>> k8s: api server logs:
error: context "cilium-831611" does not exist

>>> host: /etc/cni:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: ip a s:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: ip r s:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: iptables-save:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: iptables table nat:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-831611

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-831611

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-831611" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-831611" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-831611

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-831611

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-831611" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-831611" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-831611" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-831611" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-831611" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: kubelet daemon config:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> k8s: kubelet logs:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-831611

>>> host: docker daemon status:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: docker daemon config:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: docker system info:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: cri-docker daemon status:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: cri-docker daemon config:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: cri-dockerd version:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: containerd daemon status:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: containerd daemon config:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: containerd config dump:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: crio daemon status:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: crio daemon config:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: /etc/crio:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"

>>> host: crio config:
* Profile "cilium-831611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831611"
----------------------- debugLogs end: cilium-831611 [took: 3.381865739s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-831611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-831611
--- SKIP: TestNetworkPlugins/group/cilium (3.54s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-172352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-172352
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    